Google Answers Why Core Updates Can Roll Out In Stages via @sejournal, @martinibuster

Google’s John Mueller responded to a question about whether core updates roll out in stages or follow a fixed sequence. His answer offers some clarity about how core updates are rolled out and also about what some core updates actually are.

Question About Core Update Timing And Volatility

An SEO asked on Bluesky whether core updates behave like a single rollout that is then refined over time or if the different parts being updated are rolled out at different stages.

The question reflects a common observation that rankings tend to shift in waves during a rollout period, often lasting several weeks. This has led to speculation that updates may be deployed incrementally rather than all at once.

They asked:

“Given the timing, I want to ask a core update related question. Usually, we see waves of volatility throughout the 2-3 weeks of a rollout. Broadly, are different parts of core updated at different times? Or is it all reset at the beginning then iterated depending on the results?”

Core Updates Can Require Step-By-Step Deployment

Mueller explained that Google does not formally define or announce stages for core updates. He noted that these updates involve broad changes across multiple systems, which can require a step-by-step rollout rather than a single deployment.

He responded:

“We generally don’t announce “stages” of core updates.. Since these are significant, broad changes to our search algorithms and systems, sometimes they have to work step-by-step, rather than all at one time. (It’s also why they can take a while to be fully live.)”

Updates Depend On Systems And Teams Involved

Mueller next added that there is no single mechanism that governs how all core updates are released. Instead, updates reflect the work of different teams and systems, which can vary from one update to another.

He explained:

“I guess in short there’s not a single “core update machine” that’s clicked on (every update has the same flow), but rather we make the changes based on what the teams have been working on, and those systems & components can change from time to time.”

Core Updates May Roll Out Incrementally Rather Than All At Once

Mueller’s explanation suggests that the waves of volatility observed during core updates may correspond to incremental changes across different systems rather than a single reset followed by adjustments. Because updates are tied to multiple components, the rollout may progress in parts as those systems are updated and brought fully live.

Some changes are complex enough to require a nuanced, step-by-step rollout rather than a single release, which may explain why ranking shifts can appear uneven during the rollout period.

Connection To Google’s Spam Update?

I don’t think it was a coincidence that the March core update followed closely after the recent March 2026 spam update. It’s logical for spam fighting to be part of the bundle of changes made in a core algorithm update, which is why Googlers sometimes say that a core update should surface more relevant content and less low-quality content.

So when Google announces a spam update, it stands out: either Google is making a major change to the infrastructure that its core algorithm runs on, or the spam update is meant to weed out specific forms of spam before a core algorithm update rolls out, clearing the table, so to speak. That appears to be what happened with the recent spam and core algorithm updates.

Comparison With Early Google Updates

Way back in the early days, around 25 years ago, Google ran an update every month, offering a chance to see whether new pages were indexed and ranked, and how existing pages were doing. The first days of each update saw widescale fluctuations, which we (the members of the WebmasterWorld forum) called the Google Dance.

Back then, it felt like updates were just Google adding more pages and re-ranking them. Then, around the 2003 Florida update, it became apparent that the actual ranking systems were being changed and that the fluctuations could go on for months. That was probably the first time the SEO community noticed a different kind of update, one closer to a core algorithm update.

In my opinion, one way to think of it is that Google’s indexing and ranking algorithms are like software. And then, there’s also hardware and software that are a part of the infrastructure that the indexing and ranking algorithms run on (like the operating system and hardware of your desktop or laptop).

That’s an oversimplification, but it’s useful to me for visualizing what a core algorithm update might be. Most, if not all, of it relates to the indexing and ranking part, but I think there are sometimes infrastructure-type changes going on that improve how indexing and ranking work.

Featured Image by Shutterstock/A9 STUDIO

https://www.searchenginejournal.com/google-answers-why-core-updates-can-roll-out-in-stages/571003/




The Science Of What AI Actually Rewards via @sejournal, @Kevin_Indig


In “The Science Of How AI Pays Attention,” I analyzed 1.2 million ChatGPT responses to understand exactly how AI reads a page. In “The Science Of How AI Picks Its Sources,” I analyzed 98,000 citation rows to understand which pages make it into the reading pool at all.

This is Part 3.

Where Part 1 told you where on a page AI looks, and Part 2 told you which pages AI routinely considers, this one tells you what AI actually rewards inside the content it reads.

The data clarifies:

  • Most AI SEO writing advice doesn’t hold at scale. There is no universal “write like this to get cited” formula – the signals that lift one industry’s citation rates can actively hurt another.
  • The entity types that predict citation are not the ones being targeted. DATE and NUMBER are universal positives. PRICE suppresses citation in five of six verticals, and KG-verified entities are a negative signal.
  • The one writing signal that holds across all seven verticals: Declarative language in your intro, +14% aggregate lift.
  • Heading structure is binary. Commit to the right number for your vertical or use none. Three to four headings are worse than zero in every vertical.
  • Corporate content dominates. Reddit doesn’t. AI citation behavior does not mirror what happened to organic search in 2023-2024.

1. Specific Writing Signals Influence Citation, While Others Harm It

While “The Science Of How AI Pays Attention” covers parts of the page and types of writing that influence ChatGPT visibility, I wanted to understand which writing-level signals – word count, structure, language style – predict higher AI citation rates across verticals.

Approach

  1. I compared high-cited pages (more than three unique prompt citations) vs. low-cited across seven writing metrics: word count, definitive language, hedging, list items, named entity density, and intro-specific signals.
  2. I analyzed the first 1,000 words for list item count, named entity density, intro definitive language token density, and intro number count.

Results: Across all verticals, definitive phrasing and including relevant entities matter. But most signals are flat.

Image Credit: Kevin Indig

What The Industry Patterns Showed

When splitting the data up by vertical, we suddenly see preferences:

  • Total word count was strongest in CRM/SaaS (1.59x).
  • Finance was an anomaly with word count: Shorter pages win (0.86x word count).
  • Definitive phrases in the first 1,000 characters were positive for most verticals.
  • Education is a signal void. Writing style explains almost nothing about citation likelihood there.
Image Credit: Kevin Indig

Top Takeaways

1. There is no universal “write like this to get cited” formula. For example, the signals that lift CRM/SaaS citation rates actively hurt Finance. Instead, match content format to vertical norms.

2. The one universal rule: open with a direct declarative statement. Not a question, not context-setting, not preamble. The form is “[X] is [Y]” or “[X] does [Z].” This is the only writing instruction that holds regardless of vertical, content type, or length.

3. LLMs “penalize” hedging in your intro. “This may help teams understand” performs worse than “Teams that do X see Y.” Remove qualifiers from your opening paragraph before any other optimization.

2. The Entity Types That Predict Citation Are Not The Ones Being Targeted

Most AEO advice focuses on named entities as a category: Pack in more known brand names, tool names, numbers. The cross-vertical entity type analysis below tells a more specific (and more useful) story.

Approach

  1. Ran Google’s Natural Language API on the first 1,000 characters (about 200-250 words) of each unique URL.
  2. Computed lift per entity type: % of high-cited pages with that type / % of low-cited pages.
  3. Analyzed 5,000 pages across seven verticals.

* A quick note on terminology: Google NLP classifies software products, apps, and SaaS tools as CONSUMER_GOOD, a legacy label from when the API was built for physical retail. Throughout this analysis, CONSUMER_GOOD means software/product entities.
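As a rough sketch of the lift computation in step 2, the page dictionaries below are a hypothetical stand-in for the real dataset, but the ratio matches the definition above (share of high-cited pages containing an entity type divided by the share of low-cited pages containing it):

```python
def entity_type_lift(pages):
    """Compute citation lift per entity type.

    `pages` is a list of dicts like
    {"entity_types": {"DATE", "NUMBER"}, "high_cited": True},
    where `high_cited` means the URL drew more than three unique
    prompt citations (hypothetical structure, for illustration).
    """
    high = [p for p in pages if p["high_cited"]]
    low = [p for p in pages if not p["high_cited"]]

    def share(group, etype):
        # Fraction of pages in the group whose intro contains the entity type.
        return sum(etype in p["entity_types"] for p in group) / len(group)

    all_types = set().union(*(p["entity_types"] for p in pages))
    # lift = share among high-cited / share among low-cited
    return {
        t: share(high, t) / share(low, t)
        for t in all_types
        if share(low, t) > 0  # skip types absent from low-cited pages
    }
```

A lift above 1.0 means the entity type appears more often on high-cited pages; below 1.0 means it is relatively more common on low-cited pages.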

Results: DATE and NUMBER are the most universal positive signals. Interestingly, PRICE is the strongest universal negative.

Image Credit: Kevin Indig
Image Credit: Kevin Indig

What The Industry Patterns Showed

  • DATE is the most universal positive signal, with the exception of Finance (0.65x).
  • NUMBER is the second most universal. Specific counts, metrics, and statistics in the intro consistently predict higher citation rates. Finance (0.98x) and Product Analytics (1.10x) mark the floor and ceiling of that range.
  • PRICE is the strongest universal negative. Pages that open with pricing signal commercial intent. Finance is the sole exception at 1.16x, likely because price here means fee percentages and rate comparisons, which are the actual reference data financial queries are looking for.
  • CONSUMER_GOOD (software/product entities) is mixed. In Healthcare, product entities signal established brands and tools. In Crypto, naming specific protocols and products is core to answering technical queries.
  • PHONE_NUMBER is a positive signal in Healthcare (1.41x) and Education (1.40x). In both cases, it is almost certainly a proxy for established brands/institutions/providers with real physical presence, not a literal signal to add phone numbers to your pages.

The Knowledge Graph inversion deserves its own note here:

  • The data showed that high-cited pages average 1.42 KG-verified entities vs. 1.75 for low-cited pages (lift: 0.81x).
  • Pages built around well-known, KG-verified entities (major brands, institutions, famous people) tend toward generic coverage, which isn’t preferred by ChatGPT.
  • High-cited pages are dense with specific, niche entities: a particular methodology, a precise statistic, a named comparison. Many of those niche entities have no KG entries at all. That specificity is what AI reaches for.

Top Takeaways

1. Add the publish date to your pages and aim to use at least one specific number in your content. That combination is the closest thing to a universal AI citation signal this dataset produced. But Finance gets there through price data and location specificity instead.

2. Avoid opening with pricing in non-finance verticals. Price-dominant intros correlate with lower citation rates.

3. KG presence and brand authority do not translate to an AI citation advantage. Chasing Wikipedia entries, brand panels, or KG verification is the wrong lever. Specific, niche entities (even ones without KG entries) outperform famous ones.

3. Heading Structure: Commit To One Or Don’t Bother

We know headings matter for citations from the previous two analyses. Next, I wanted to understand whether heading count predicts citation rates and whether the optimal structure varies by vertical.

Approach

  1. Counted total headings per page (H1+H2+H3) across all cited URLs.
  2. Grouped pages into 7 heading-count buckets: 0, 1-2, 3-4, 5-9, 10-19, 20-49, 50+.
  3. Computed high-cited rate (% of URLs that are high-cited) per bucket per vertical.
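The bucketing and per-bucket rate from the steps above can be sketched as follows; the page tuples are a hypothetical structure standing in for the real dataset:

```python
def heading_bucket(count):
    """Map a page's total heading count (H1+H2+H3) to the buckets
    used in this analysis."""
    if count == 0:
        return "0"
    if count <= 2:
        return "1-2"
    if count <= 4:
        return "3-4"
    if count <= 9:
        return "5-9"
    if count <= 19:
        return "10-19"
    if count <= 49:
        return "20-49"
    return "50+"

def high_cited_rate_by_bucket(pages):
    """Percent of URLs that are high-cited, per heading-count bucket.

    `pages` is a list of (heading_count, is_high_cited) tuples
    (illustrative structure)."""
    totals, highs = {}, {}
    for count, high in pages:
        bucket = heading_bucket(count)
        totals[bucket] = totals.get(bucket, 0) + 1
        highs[bucket] = highs.get(bucket, 0) + (1 if high else 0)
    return {bucket: highs[bucket] / totals[bucket] for bucket in totals}
```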

Results: Including more headings in your content is not universally better. The sweet spot depends on vertical and content type. One finding holds everywhere: Strangely, 3-4 headings are worse than zero.

Image Credit: Kevin Indig

What The Industry Patterns Showed

  • CRM/SaaS is the only vertical where the 20+ heading lift is confirmed: 12.7% high-cited rate at 20-49 headings vs. a 5.9% baseline. The 50+ bucket reaches 18.2%. Long structured reference pages and comparison guides with one section per tool outperform everything else here.
  • Healthcare inverts most sharply. The high-cited rate drops from 15.1% at zero headings to 2.5% at 20-49 headings. A page with 30 H2s on telehealth topics signals optimization intent, not clinical authority.
  • Finance peaks at 10-19 headings (29.4% high-cited rate). Structured but not exhaustive: think rate tables, regulatory breakdowns, and advisor comparison pages with moderate heading depth.
  • Crypto peaks at five to nine headings (34.7% high-cited rate). Technical documentation in this vertical tends toward dense prose with moderate navigation structure. Over-structuring breaks up the technical depth.
  • Education is flat across all heading counts, which is consistent with the writing signals finding. Heading structure explains almost nothing about citation likelihood in education content.
  • The three to four heading dead zone holds across every vertical without exception. Partial structure confuses AI navigation without providing the full benefit of a committed hierarchy.

Top Takeaways

1. The 20+ heading finding from Part 1 is a CRM/SaaS finding, not a universal one. Applying it to healthcare, education, or finance could actively suppress citation rates in those verticals.

2. The principle that holds everywhere: Commit to structure or don’t use it. The middle ground costs you in every vertical. A fully-structured page with the right heading depth outperforms a half-structured page in every vertical.

3. Use the optimal heading range for your vertical. Crypto: 5-9. Finance and Education: 10-19. CRM/SaaS: 20+ (with H3s). Healthcare: 0 or 5-9 at most. Long CRM reference pages with 50+ sections are the one case where maximum heading depth pays off.

4. UGC Doesn’t Dominate

The “Reddit effect” reshaped organic search between 2024 and 2025. I wanted to understand whether ChatGPT cites user-generated content (Reddit, forums, reviews) at meaningful rates or whether corporate/editorial content dominates.

The common industry assumption – that AI also preferentially cites community voices – is not what we found in the data.

Approach

  1. Classified each cited URL by domain as either (1) UGC (Reddit, Quora, Stack Overflow, forum subdomains, Medium, Substack, Product Hunt, Tumblr, and community/forum URL prefixes) or (2) corporate/editorial.
  2. Computed citation share per category per vertical.
  3. Dataset: 98,217 citations across 7 verticals.
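The classification rule in step 1 can be sketched roughly as follows; the domain list mirrors the one above, and the registrable-domain handling is deliberately naive (an illustration of the rule, not the exact implementation used):

```python
from urllib.parse import urlparse

# Domains treated as user-generated content in this analysis.
UGC_DOMAINS = {
    "reddit.com", "quora.com", "stackoverflow.com", "medium.com",
    "substack.com", "producthunt.com", "tumblr.com",
}
# Subdomain prefixes that mark community/forum sections of corporate sites.
COMMUNITY_PREFIXES = ("community.", "forum.", "forums.", "discuss.")

def classify_citation(url: str) -> str:
    """Label a cited URL as 'ugc' or 'corporate' by its domain."""
    host = urlparse(url).netloc.lower()
    root = ".".join(host.split(".")[-2:])  # naive registrable-domain guess
    if root in UGC_DOMAINS or host.startswith(COMMUNITY_PREFIXES):
        return "ugc"
    return "corporate"
```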

Results: Corporate content accounts for 94.7% of all citations. UGC is nearly invisible.

Image Credit: Kevin Indig

What The Industry Patterns Showed

  • Finance is the most corporate-locked vertical at 0.5% UGC. YMYL (Your Money, Your Life) content appears to systematically suppress citations to community opinion.
  • Healthcare sits at 1.8% UGC for the same structural reason. Clinical, telehealth, and HIPAA content draws almost exclusively from institutional sources.
  • Crypto has the highest UGC penetration in the dataset at 9.2%. Community-generated content (Reddit technical threads, Medium tutorials, developer forum posts) answers a meaningful proportion of analyzed queries. In a fast-moving technical niche where official documentation consistently lags, community posts fill the gap.
  • Product Analytics and HR Tech sit at 6.9% and 5.8% UGC. Both are verticals where Reddit comparison threads and product review communities provide genuine signal alongside corporate content.

Top Takeaways

1. The “Reddit effect” in SEO has not translated proportionally to AI citations. In most verticals, reddit.com captures 2-5% of total citations. This finding is in line with other industry research, including this report from Profound.

2. For finance and healthcare: UGC has near-zero AI citation value. Invest in structured, authoritative corporate content with clear sourcing. Community engagement may matter for other reasons, but it does not contribute meaningfully to AI citation share in these verticals.

3. For crypto, product analytics, and HR tech: Community presence has measurable citation value. Detailed Reddit comparison threads, technical Medium posts, and structured developer forum answers can supplement corporate content reach.

What This Means For How You Strategize For LLM Visibility

Across all three parts of this study, the consistent finding is that AI citation is not primarily a writing quality problem.

Part 2 showed it is a content architecture problem: Thin single-intent pages are structurally locked out regardless of how well they’re written. This piece shows the same logic applies inside the content itself.

The aggregate writing signals table is the most important chart in this analysis. Not because it shows you what to do, but because it shows how much of what the AI SEO/GEO/AEO industry is telling you doesn’t survive cross-vertical scrutiny. Word count, list density, named entity counts … all flat or negative at the aggregate. The signals that work are vertical-specific and smaller than our industry’s consensus implies.

The meta-lesson from this analysis is that findings are vertical (and probably topic) specific, which is no different in SEO.

This part concludes the Science of AI – for now. Because the AI ecosystem is constantly changing.

Methodology

We analyzed ~98,000 ChatGPT citation rows pulled from approximately 1.2 million ChatGPT responses from Gauge.

Because AI behaves differently depending on the topic, we isolated the data across seven distinct, verified verticals to ensure the findings weren’t skewed by one specific industry.

Analyzed verticals:

  • B2B SaaS
  • Finance
  • Healthcare
  • Education
  • Crypto
  • HR Tech
  • Product Analytics

Featured Image: CoreDESIGN/Shutterstock; Paulo Bobita/Search Engine Journal

https://www.searchenginejournal.com/the-science-of-what-ai-actually-rewards/570849/




So Your Traffic Tanked: What Smart CMOs Do Next

We’ve all seen it. Brands with healthy websites and excellent content have been watching their organic traffic from Google’s SERP erode for years. In a recent webinar hosted by Search Engine Journal, guest speaker Nikhil Lai, principal analyst of Performance Marketing for Forrester Research, estimated his clients are losing between 10 and 40% of organic and direct traffic year-over-year.

However, there is a stunning bright spot: Lai said referral traffic from answer engines is growing 40% month over month. Visitors arriving from those engines convert at two to four times the rate of traditional search visitors, spend three times as long on site, and arrive with queries averaging 23 words, compared to the three or four words that defined the last decade of search.

Lai asserted that the channel driving this shift deserves a seat at the CMO’s table. Answer engines influence brand perception before purchase intent forms, which makes answer engine optimization (AEO) a brand investment, and puts budget and measurement decisions at the CMO level.

Here is the strategic roadmap Lai laid out at SEJ Live. He highlighted the decisions, org structures, and measurement frameworks that will move AEO from a search team initiative to a C-suite priority.

Answer Engines Build Demand Before Buyers Know What They Want

Classic search captures intent that already exists. A user types “running shoes,” clicks a result, and evaluates options. Answer engines operate earlier and differently: users hold extended conversations with large datasets, rarely click through, and leave those sessions with specific brand associations formed across multiple follow-up questions.

A user who once searched “running shoes” now asks ChatGPT, “What’s the best shoe for overpronation with wide feet in cold weather on pavement?” They exit that conversation with a brand name in mind and search for it directly. Your brand appeared in an AI conversation before the user ever reached your site. Demand generation now happens every day inside users’ research sessions.

The Forrester data Lai presented reinforces the quality of that exposure: Sessions on answer engines average 23 minutes, with users asking five to eight follow-up questions per session. Each turn is another brand impression. The click-through rate stays low; the conversion rate on the traffic that does arrive runs two to four times higher than search-sourced traffic, with stronger average order value and lifetime value.

Brand familiarity is built in answer engines before purchase intent crystallizes in the user’s mind.

SEO Is The Foundation Of AEO

The brands pulling back on SEO investment in response to AEO are making a costly mistake. Lai put it directly: 85 to 90% of current SEO best practices remain fully valid for answer engine visibility.

Google’s E-E-A-T framework (experience, expertise, authoritativeness, trustworthiness) still governs how quality is evaluated across every index. Site architecture, mobile load speed, structured data, and indexation hygiene all strengthen performance across every engine. Every alternative index (Bing’s, Brave’s) is benchmarked against Google’s for completeness. Every bot (GPTBot, Claudebot, Perplexitybot) is benchmarked against Googlebot for sophistication.

SEO is the infrastructure on which AEO runs. The shift is an expansion of scope and emphasis, but AEO is not a replacement of SEO fundamentals.

What changes is where additional effort goes: natural-language FAQ optimization, off-site authority building, pre-rendering for less sophisticated bots, and a measurement framework built around share of voice rather than click volume.

Bing Is Now Your Distribution Network For Every Non-Google Engine

Most answer engines outside Google draw primarily from Bing’s index.

Bing evaluates credibility by weighting what others say about your brand more heavily than what your own site claims. This explains why Reddit threads, Quora answers, Wikipedia entries, G2 reviews, YouTube videos, and Trustpilot pages dominate AI-generated answers. The off-site web has become the primary source of record for how AI describes your brand.

The immediate tactical implication: Push every sitemap update directly to Bing via the IndexNow protocol. This triggers Bingbot to crawl fresh content and feeds that content into Perplexity, ChatGPT, and the broader answer engine ecosystem faster than waiting for organic discovery.
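The IndexNow protocol itself is a single JSON POST. Below is a minimal sketch using Python's standard library, with placeholder host, key, and URL values; note that your key must also be published as a .txt file on your site so the engine can verify ownership:

```python
import json
import urllib.request

# Bing's IndexNow endpoint; api.indexnow.org also relays to participating engines.
INDEXNOW_ENDPOINT = "https://www.bing.com/indexnow"

def build_indexnow_payload(host, key, urls, key_location=None):
    """Build the JSON body defined by the IndexNow protocol."""
    payload = {"host": host, "key": key, "urlList": list(urls)}
    if key_location:
        # Only needed when the key file isn't at the site root.
        payload["keyLocation"] = key_location
    return payload

def submit_urls(host, key, urls):
    """POST changed URLs to Bing via IndexNow (placeholder values)."""
    body = json.dumps(build_indexnow_payload(host, key, urls)).encode("utf-8")
    req = urllib.request.Request(
        INDEXNOW_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json; charset=utf-8"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 200 or 202 means the submission was accepted
```

Calling `submit_urls` from a publish or sitemap-update hook keeps Bing’s index, and the engines that draw from it, current without waiting for organic discovery.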

Bing’s index remains the fastest route to non-Google answer engine visibility. Perplexity is building its own index (Sonar), and OpenAI has signaled plans to build or acquire one, but Bing is the distribution network that matters today.

AEO Requires Cross-Functional Ownership

AEO arguably spans more functions than SEO. The two share three: content, web development, and paid search. AEO also interfaces more strongly with PR, brand marketing, and social media.

PR earns a seat because off-site authority outweighs on-site signals in AEO. Brand mentions in publications, influencer mentions, and third-party reviews all directly shape how answer engines describe your brand.

Social belongs in the room because Reddit threads and Facebook group discussions show up in AI-generated answers. Community management and reputation management, previously handled separately from SEO, are now integral to AEO. When your social listening data reaches content teams before they draft, the content responds to the questions buyers are actually asking. When it doesn’t, you’re optimizing for questions nobody asked.

Lai proposed two organizational models that work to capture the opportunities inherent in AEO:

  1. Center of Excellence: A senior SEO specialist evolves into an AEO evangelist, runs a COE, and publishes cross-functional standards: clear rules like “every piece of content must answer these five questions” or “every page must include author schema.”
  2. AI Orchestrator: A dedicated hire who builds agents to handle repeatable AEO tasks (schema implementation, JavaScript reduction, FAQ content creation) and governs the cross-functional workflow with published guidelines for all stakeholders.

The CMO’s decision is which model fits the organization’s scale, and whether to build it internally or partner with an agency that has already built the infrastructure.

The Content Strategy That Wins In AI Responses

Long-form skyscraper content is an ancient relic. Answer engines reward precise, specific answers to real questions, delivered succinctly and across multiple formats. Lai framed this as Forrester’s question-to-content framework: Every piece of content maps directly to a FAQ being asked on answer engines, including the follow-up questions that emerge within a single session.

Five content moves that produce results:

  1. Build surround-sound FAQ coverage. Create glossaries, FAQ pages, videos, and blog posts that address the same topic cluster from different angles. When Claudebot crawls 38,000 pages for every referred page visit (per Cloudflare data), each page it indexes is an opportunity to signal topical authority. Volume and variety matter.
  2. Publish direct competitor comparisons. Users ask answer engines to compare brands. Brands that create honest, data-backed comparison guides are gaining prominent visibility, because they directly answer the queries being asked that pit a brand against its competitors. This was once a taboo content format; it has become a competitive requirement.
  3. Treat off-site syndication as the new backlinking. Hosting AMAs on Reddit, answering questions on Quora, and contributing to industry publications that rank in AI responses all earn the off-site authority that answer engines weigh most heavily. Give third-party voices data and perspective they couldn’t generate themselves, and they will produce mentions that shape how AI describes your brand.
  4. Pre-render pages for bot access. The bots crawling your site lack the compute budget to render JavaScript-heavy pages. Claudebot’s 38,000:1 crawl-to-referral ratio compared to Googlebot’s 5:1 ratio reflects this sophistication gap. Pre-rendering a JavaScript-free version for bots while serving the full experience to human visitors ensures your content gets indexed across every engine. Over time, limit the amount of JavaScript on site. Have content directly in HTML so bots can understand your content, and index it more often. The more you’re crawled and indexed, the more visible you become.
  5. Create unique content. Lai said, “Being distinctive, differentiated, and unique will help your brand stand out in a sea of sameness. Implicit in all this is that you need a lot more content, greater content velocity and diversity, which means you can use AI to create content. Google won’t automatically penalize AI-created content unless it lacks the watermarks of human authorship. The syntax and diction have to be natural. Use AI to create content, but don’t make it seem AI-generated. Get down into the details. It’s not enough to say your product is great. Explain why in different temperatures, conditions, the thickness, and so on, to satisfy long-tail intent.”
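One common way to implement the pre-rendering in point 4 is dynamic rendering: detect crawler user agents at the server or edge and return a static HTML snapshot instead of the JavaScript-heavy app. A minimal sketch, with an illustrative bot list and hypothetical file names:

```python
# Known answer-engine crawler user-agent tokens, as published by their vendors.
# This list is illustrative, not exhaustive.
AI_BOT_TOKENS = ("GPTBot", "ClaudeBot", "PerplexityBot")

def wants_prerendered(user_agent: str) -> bool:
    """Return True when the request comes from a crawler that should
    receive the pre-rendered, JavaScript-free HTML snapshot."""
    ua = (user_agent or "").lower()
    return any(token.lower() in ua for token in AI_BOT_TOKENS)

def choose_variant(user_agent: str) -> str:
    """Route bots to the static snapshot and humans to the full
    JavaScript app (hypothetical file names)."""
    return "snapshot.html" if wants_prerendered(user_agent) else "app.html"
```

Serving the same content in both variants matters: this is rendering optimization for low-compute crawlers, not cloaking.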

Replace Legacy KPIs With Metrics That Predict Market Share

Lai described the internal conversation he hears most often from Forrester clients: “The hardest part of this transition from SEO to AEO has been trying to convince management to not focus as much on CTR and traffic. Those were indicators of organic authority. They are no longer reliable indicators.

“The new KPIs to focus on are visibility and share of voice. Share of voice can be measured in many ways. The most common are citation share: how often is my brand cited, how often is my content linked, of the opportunities I have to be cited; and mention share: how often is my brand mentioned of the opportunities I have to be mentioned. I’m also seeing more clients look into citation attempts: how often is ChatGPT trying to cite my content, and are there things I can do on the back end of my site to make that citation attempt score go up? Those are the new indicators of authority,” said Lai.

These metrics connect directly to branded search volume, which Lai called “the single strongest leading indicator of market share growth.” The chain of logic to present to the board: higher citation and mention share drives more branded searches, which converts at higher rates, which compounds into measurable market share gains against competitors.

Lai said he expects Google to add citation metrics to Search Console once AI Max adoption reaches critical mass, and an OpenAI Analytics product before year-end.

For now, Lai suggested, the best course of action is to establish a baseline with your current SEO platform and track the directional trend. Despite accuracy concerns about how today’s popular SEO tools measure answer engine mentions, Lai contended that even imperfect measurement reveals which content clusters are earning citations and which need rebuilding.

The Agentic Phase Starts The Clock On B2B Urgency

Answer engines are moving from conversation to action. The current phase, characterized by extended back-and-forth with large datasets, is the warm-up. The agentic phase is defined by engines’ booking, filing, researching, and purchasing on users’ behalf. This will mean fewer clicks, longer sessions, and richer intent signals available to advertisers.

For B2B CMOs, the urgency is immediate. Forrester research shows GenAI has already become the number one source of information for business buyers evaluating purchases of $1 million or more, coming in ahead of customer references, vendor websites, and social media. Your largest deals are being influenced by AI conversations before your sales team enters the picture.

AEO visibility in B2B is a current-pipeline variable that requires immediate attention.

The brands building complete search strategies now, covering answer engines, on-site conversational search, and structured data across every indexed channel, will own discovery and have greater control over brand perception in the next phase of buying behavior.

The window to gain an early-mover competitive advantage is shrinking; soon, AEO visibility will be just another standard expectation everyone has to meet.

Key Takeaways For CMOs

  • Reframe the traffic story. Lower overall traffic volume paired with two-to-four-times higher conversion rates is a net performance gain. Build that case proactively before your CEO draws the wrong conclusion from a falling traffic chart.
  • Fund AEO as an upper-funnel brand channel. That means applying the same budget logic, measurement frameworks, and executive ownership you would bring to any major brand awareness investment, where success is measured in visibility, perception, and long-term share of voice rather than clicks and conversions.
  • Move to share-of-voice KPIs. Citation share and mention share drive branded search volume, which drives market share. Make that causal chain visible to your leadership team.
  • Assign cross-functional ownership with clear governance. Choose between a center of excellence or an AI orchestrator model and make that structural decision this quarter.
  • Prioritize off-site authority as a content strategy responsibility. Reddit, Quora, third-party publications, and YouTube shape AI’s perception of your brand. PR and social teams own the channels that matter most for AEO.
  • Push every sitemap update to Bing via IndexNow. Bing’s index feeds most non-Google answer engines. This is a 15-minute technical change with compounding distribution benefits.
  • Use AI to help with content, but always apply human editing for authority. Content that reads as machine-generated loses trust across every engine, including Google.
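The IndexNow bullet above is easy to act on. Here is a minimal Python sketch of the submission flow, using only the standard library; the domain, key, and URL are placeholders you would replace with your own values (the key must match a key file hosted on your domain).

```python
import json
import urllib.request

def build_indexnow_payload(host, key, urls):
    """Assemble the JSON body the IndexNow endpoint expects."""
    return {"host": host, "key": key, "urlList": list(urls)}

# Placeholder values: substitute your own domain and the key you host
# at https://<your-domain>/<key>.txt
payload = build_indexnow_payload(
    host="www.example.com",
    key="your-indexnow-key",
    urls=["https://www.example.com/updated-page/"],
)

def submit(payload, endpoint="https://api.indexnow.org/indexnow"):
    """POST the payload; Bing shares IndexNow submissions with other engines."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    return urllib.request.urlopen(req)

if __name__ == "__main__":
    submit(payload)  # run only when executed directly, not on import
```

Hooking `submit` into your publishing pipeline so it fires on every sitemap or content update is the "15-minute change" described above.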

What Does A Smart CMO Do Next?

Start with a 90-day experiment using some or all of these strategies.

Audit your current citation and mention share in one category using your existing SEO platform. Identify three high-intent FAQ clusters where your brand should be visible and build surround-sound content for each: a dedicated FAQ page, a comparison guide, and one off-site piece in a publication that appears in AI responses. Push fresh sitemaps to Bing. Track citation share and branded search volume at 30, 60, and 90 days.

The data may make the investment case for a broader rollout. If not, tweak your approach. The brands moving first will capture the highest-quality traffic at the lowest incremental cost and set a citation baseline that becomes progressively harder for competitors to match.

The full webinar is available on demand.



Featured Image: Dmitry Demidovich/Shutterstock

https://www.searchenginejournal.com/so-your-traffic-tanked-what-smart-cmos-do-next/570708/




WordPress Delays Release Of Version 7.0 To Focus On Stability via @sejournal, @martinibuster

WordPress 7.0, previously scheduled for an April 9 release, will be delayed in order to stabilize the Real-Time Collaboration feature and ensure that the release, a major milestone, will “target extreme stability.” Much is riding on WordPress 7.0, as it will ship with features that usher in the age of AI-driven content management systems.

Prioritization Of Stability

Matt Mullenweg, co-founder of WordPress, commenting in the official Making WordPress Slack workspace, said the release should step back from its current trajectory and prioritize stability, calling for a longer pre-release phase to get the real-time collaboration (RTC) feature working correctly. The delay is expected to last weeks, not days, and is described as a one-off deviation from WordPress’s planned date-driven schedule.

Mullenweg posted:

“Given the scope and status of 7.0, I think we should go back to beta releases, get the new tables right, lock in everything we want for 7.0, and then start RCs again. Date-driven is still our default, but for this milestone release we want to target extreme stability and exciting updates, especially as AI-accelerated development is increasing people’s expectations for software.

This is a one-off, I think for future we should get back on the scheduled train, with an aim for 4-a-year in 2027, to hopefully reflect our AI-enabled ability to move faster.”

Extended Release Candidate Phase Replaces Beta Reversion

To avoid technical compatibility issues, the project will remain in the release candidate phase, extending the testing period through additional RC builds as needed.

The proposal to return to beta releases was rejected because it would break PHP version comparison behavior, plugin update logic, and tooling that depends on standard version sequencing. Continuing with RC builds preserves compatibility while allowing more time for testing and fixes.

Real-Time Collaboration

The delay is largely due to the Real-Time Collaboration feature, which introduces new database tables and changes how WordPress handles editing sessions. Contributors identified risks related to performance, data handling, and interactions with existing systems.

A primary concern is that real-time editing currently disables persistent post caches during active sessions, a performance issue the team is working to resolve before the final release.

Database Design Raises Performance Concerns

A key part of the discussion focused on how to structure the database for Real-Time Collaboration (RTC). A proposed single RTC table would support both real-time editing updates and synchronization. But some contributors noted that the two workloads are fundamentally different.

Real-time collaboration generates high-frequency, bursty writes that require low latency, meaning updates must happen with very little delay. Synchronization between environments, by contrast, involves slower, structured updates that may include full-table scans.

Combining both patterns within one table risks performance issues and added complexity. Contributors discussed separating these workloads into separate tables optimized for each use case, but no decision has been made.

Gap In Release Candidate Testing Raises Concern

The discussion in the WordPress Slack workspace also raised concern over whether there was enough real-world release candidate testing, since database schema changes increase the risk of failures during upgrades. The suggestion to use the Gutenberg plugin for testing was rejected because database changes could affect production sites and would require complex migration logic. Instead, the project will use an extended RC phase to increase testing exposure and gather feedback from a wider group of users.

Versioning Constraints

The proposal to delay version 7.0 raised additional issues. PHP version comparison rules and related tooling complicated any return to beta versions. Contributors agreed that staying within the release candidate sequence (RC1, RC2, RC3, and so on) avoids these problems while allowing continued iteration.

Future Release Cadence Remains Unchanged

The delay is described as a temporary exception. Matt Mullenweg said the project intends to return to a regular release schedule, with a goal of delivering roughly four releases per year by 2027 as development speeds increase with AI-assisted workflows.

Implications For Developers And Users

Developers should expect continued changes to the Real-Time Collaboration feature and its supporting database structures during the extended release candidate phase. The longer testing period provides more time to identify issues before release. For site owners and hosts, the delay shows that WordPress is prioritizing stability over schedule while introducing more complex real-time and synchronization features.

Impact Of RTC On Hosting Environments

One issue that wasn’t discussed, but is real, is how real-time collaboration might affect web hosting providers, who will need to test the feature to see whether it introduces problems on shared hosting environments. While RTC will ship turned off by default, the impact of customers enabling it in a shared hosting environment is currently unknown. A spokesperson for managed WordPress hosting provider Kinsta told Search Engine Journal they are still testing. Given that the feature is still evolving, Kinsta and other web hosts will have to continue testing the upcoming WordPress release candidates.

I think most people will agree that the decision to delay the release of WordPress 7.0 is the right call.

https://www.searchenginejournal.com/wordpress-delays-release-of-version-7-0-to-focus-on-stability/570944/




How To Identify Which LLM Is Actually Working For You [Webinar] via @sejournal, @hethr_campbell

AI search is dominating the strategy conversation right now, and everyone is hearing the same thing from clients and directors: “What’s our AI search plan?”

The instinct is to optimize everywhere (ChatGPT, Perplexity, Gemini) and move fast. But before you reallocate budget or rewrite your GEO roadmap, there’s a more useful question to ask first:

Which LLM is actually driving conversions in your clients’ specific industry?

Join us for an upcoming expert panel webinar where we’ll dive into exactly that.

What You’ll Learn

In this webinar, Danielle Wood, Content & Creative Manager at CallRail, and Natalie Johnson, SEO & AI Visibility Expert & Founder of SweetGlow Marketing, will break down real conversion data by LLM and show how platform-level performance should shape your GEO strategy.

Specifically, you’ll walk away with:

  • Conversion data by LLM platform, so you know where high-intent traffic is actually coming from in each industry
  • A clear AI prioritization framework to stop spreading GEO effort equally and concentrate it where it converts
  • A reporting model that ties AI search activity to real business outcomes clients can see and trust

Why Attend?

You’ll finally be able to justify AI search investment. This session will give you the data and the framework to make that case and to implement the strongest possible AI search strategy.

Join us live to get your questions answered directly by the expert panel.

https://www.searchenginejournal.com/how-to-identify-which-llm-is-actually-working-for-your-or-your-clients-brand-webinar/570019/




Google: Pages Are Getting Larger & It Still Matters via @sejournal, @MattGSouthern

Google’s Gary Illyes and Martin Splitt used a recent episode of the Search Off the Record podcast to discuss whether webpages are getting too large and what that means for both users and crawlers.

The conversation started with a simple question: are websites getting fat? Splitt immediately pushed back on the framing, arguing that website-level size is meaningless. Individual page size is where the discussion belongs.

What The Data Shows

Splitt cited the 2025 Web Almanac from HTTP Archive, which found that the median mobile homepage weighed 845 KB in 2015. By July, that same median page had grown to 2,362 KB. That’s roughly a 3x increase over a decade.

Both agreed the growth was expected, given the complexity of modern web applications. But the numbers still surprised them.

Splitt noted the challenge of even defining “page weight” consistently, since different people interpret the term differently depending on whether they’re thinking about raw HTML, transferred bytes, or everything a browser needs to render a page.

How Google’s Crawl Limits Fit In

Illyes discussed a 15 MB default that applies across Google’s broader crawl infrastructure, where each URL gets its own limit, and referenced resources like CSS, JavaScript, and images are fetched separately.

That’s a different number from what appears in Google’s current Googlebot documentation. Google states that Googlebot for Google Search crawls the first 2 MB of a supported file type and the first 64 MB of a PDF.

Our previous coverage broke down the documentation update that clarified these figures earlier this year. Illyes and Splitt discussed the flexibility of these limits in a previous episode, noting that internal teams can override the defaults depending on what’s being crawled.

The Structured Data Question

One of the more interesting moments came when Illyes raised the topic of structured data and page bloat. He traced it back to a statement from Google co-founder Sergey Brin, who said early in Google’s history that machines should be able to figure out everything they need from text alone.

Illyes noted that structured data exists for machines, not users, and that adding the full range of Google’s supported structured data types to a page can add weight that visitors never see. He framed it as a tension rather than offering a clear answer on whether it’s a problem.

Does It Still Matter?

Splitt said yes. He acknowledged that his home internet connection is fast enough that page weight is irrelevant in his daily experience. But he said the picture changes when traveling to areas with slower connections, and noted that metered satellite internet made him rethink how much data websites transfer.

He suggested that page size growth may have outpaced improvements in median mobile connection speeds, though he said he’d need to verify that against actual data.

Illyes referenced prior studies suggesting that faster websites tend to have better retention and conversion rates, though the episode didn’t cite specific research.

Looking Ahead

Splitt said he plans to address specific techniques for reducing page size in a future episode.

Most pages are still unlikely to hit those limits, with the Web Almanac reporting a median mobile homepage size of 2,362 KB. But the broader trend of growing page weight affects both performance and accessibility for users on slower or metered connections.

https://www.searchenginejournal.com/google-pages-are-getting-larger-it-still-matters/570875/




New AI Jobs Index Ranks 784 Occupations By Loss Risk via @sejournal, @MattGSouthern

Jobs with the highest potential for AI-assisted productivity gains also face the highest projected job losses, according to a new index from Digital Planet at Tufts University’s Fletcher School.

The American AI Jobs Risk Index ranks 784 U.S. occupations, 530 metro areas, 50 states, and 20 industry sectors by vulnerability to AI-driven job loss.

All figures are model projections based on AI adoption scenarios, not actual layoffs or employment changes. The median scenario estimates 9.3 million jobs at risk, ranging from 2.7 million to 19.5 million depending on AI adoption speed.

Which Jobs Face The Highest Projected Risk

Writers and authors top the list of occupations at risk at 57%. Computer programmers and web and digital interface designers follow at 55% each. Editors are at 54%, and web developers at 46%.

Market research analysts and marketing specialists face a projected 35% job loss rate. Public relations specialists are at 37%. News analysts, reporters, and journalists face 35% risk.

Earlier analyses, such as the Anthropic Economic Index and Stanford’s “Canaries in the Coal Mine,” measured how accessible jobs are to AI. This analysis goes further by estimating how likely that exposure is to translate into projected job loss.

Augmentation & Loss Risk Go Together

The authors call the connection between the jobs that benefit most from AI-driven productivity gains and the jobs expected to see the largest losses the “augmentation-displacement link.”

When AI increases individual workers’ efficiency, companies can produce the same output with fewer employees. This mainly affects entry-level and lower-seniority roles first, because companies can cut back on hiring rather than firing.

Writing, programming, web design, technical writing, and data analysis are where this pattern is most evident. Tasks in these fields are cognitive, language-intensive, and structured enough for large language models to manage.

By Industry

Average vulnerability across all industries is about 6%. Sectors with the highest projected job loss are Information (18%), Finance and Insurance (16%), and Professional, Scientific, and Technical Services (16%).

Software Developers, Management Analysts, and Market Research Analysts face the biggest total income losses. These three roles combine high pay with large workforces, accounting for a significant share of the projected $757 billion in total at-risk annual income.

What The Analysis Doesn’t Include

Note that job creation effects aren’t included in this version. The authors intend to add that data in future updates as they gather more evidence.

Additionally, regulatory constraints, union bargaining power, and occupational licensing requirements that could help slow job losses in some sectors are not part of this analysis. The authors emphasize that their forecasts are based on different scenarios rather than being definitive.

Why This Matters

There’s a common assumption among digital professionals that using AI to boost productivity protects their jobs. However, this data challenges that idea.

SEJ previously covered this tension in 2023 when Dr. Craig Froehle of the University of Cincinnati warned that companies not investing in employee retraining would see turnover costs double. The Tufts data puts numbers on the specific occupations where that pressure is building.

Looking Ahead

Updates to the American AI Jobs Risk Index will be made as AI capabilities and labor market conditions evolve. The authors mention that future versions will try to include job creation data along with loss estimates, providing a more complete view of AI’s overall impact on employment.

The methodology is available on the Digital Planet site, which also links to a data download page.


Featured Image: rudall30/Shutterstock

https://www.searchenginejournal.com/new-ai-jobs-index-ranks-784-occupations-by-loss-risk/570867/




Answer Engine Optimization: How To Get Your Content Into AI Responses via @sejournal, @slobodanmanic

This is Part 2 in a five-part series on optimizing websites for the agentic web. Part 1 covered the evolution from SEO to AAIO and why the shift matters. This article gets practical: how AI systems actually select content, and what you can do about it.

AI Doesn’t Rank Pages. It Selects Fragments.

Traditional search ranks whole pages. AI search does something fundamentally different.

Microsoft’s Krishna Madhavan, principal product manager on the Bing team, described the shift in October 2025: AI assistants “break content down, a process called parsing, into smaller, structured pieces that can be evaluated for authority and relevance. Those pieces are then assembled into answers, often drawing from multiple sources to create a single, coherent response.”

This is the core insight. AI doesn’t pick the best page and show it. It picks the best fragments from many pages and weaves them together. Your page might rank No. 1 on Google and still not get cited in an AI response if its content isn’t structured in fragments that AI can extract.

The numbers show the shift is real. According to the Conductor AEO/GEO Benchmarks Report (January 2026; 13,770 domains, 17 million AI responses), AI traffic now accounts for 1.08% of all website sessions, growing roughly 1% month over month. Microsoft reported that AI referrals to top websites spiked 357% year-over-year in June 2025, reaching 1.13 billion visits. Small numbers today, compounding fast.

One in four Google searches now triggers an AI Overview. In healthcare, it’s nearly one in two. The surface area is growing, and the content that fills these answers has to come from somewhere. The question is whether it comes from you.

The Research: What Actually Gets Cited

The academic research on what makes content citable in AI responses has matured rapidly. The foundational paper, “GEO: Generative Engine Optimization” (Princeton, IIT Delhi, Georgia Tech, published at KDD 2024), tested nine optimization strategies and found that GEO techniques could boost visibility by up to 40% in AI responses. The most effective single technique was citing credible sources, which produced a 115.1% visibility increase for websites that weren’t already ranking in the top positions.

A counterintuitive finding: Writing in an authoritative or persuasive tone did not improve AI visibility. AI systems don’t respond to rhetorical style. They respond to verifiable information.

Since then, 2025 brought a wave of follow-up research that tested these ideas on real production AI engines rather than simulated ones.

The University of Toronto study (September 2025) was the first large-scale analysis across ChatGPT, Perplexity, Gemini, and Claude. Their most striking finding: AI search overwhelmingly favors earned media. In consumer electronics, AI cited third-party authoritative sources 92.1% of the time, compared to Google’s 54.1%. Automotive showed a similar pattern at 81.9% versus 45.1%. In other words, it’s not just how you write content, but whose domain it appears on. Press coverage, product reviews on independent websites, and mentions on industry publications carry far more weight in AI responses than your own website.

Carnegie Mellon’s AutoGEO study (October 2025) used automated methods to discover what generative engines actually prefer. The results showed up to 50.99% improvement over the best baseline, with universal preferences emerging across engines: comprehensive topic coverage, factual accuracy with citations, clear logical structure with headings and lists, and direct answers to queries.

The GEO-16 framework (September 2025) analyzed 1,702 real citations from Brave, Google AI Overviews, and Perplexity. It identified 16 on-page quality factors that predict citation likelihood. The top three: metadata and freshness, semantic HTML, and structured data. Technical on-page factors matter as much as the quality of the writing itself.

And a reality check from Columbia and MIT’s ecommerce study (November 2025): of 15 common content rewriting heuristics, 10 produced negligible or negative results. The optimization strategies that did work converged toward truthfulness, user intent alignment, and competitive differentiation. Not tricks. Substance.

The overall pattern across all of this research: AI systems reward clarity, factual accuracy, and structure. They don’t reward marketing language, persuasion tactics, or keyword density.

Content Structure That Earns Citations

Based on the research and official guidance from Microsoft and Google, here’s what structurally makes content citable.

Heading hierarchy matters more than ever. Use descriptive H2 and H3 headings that each cover one specific idea. Microsoft lists strong headings as “signals that help AI know where a complete idea starts and ends.” Vague headings like “Learn More” or “Overview” give AI nothing to work with. A heading like “How AI parses content differently than search engines” tells the system exactly what the section contains.

Q&A format is native to AI. Write questions as headings with direct answers below them. Microsoft notes that “assistants can often lift these pairs word for word into AI-generated responses.” If your content answers the question someone asks an AI, and it’s structured as a clear question-and-answer pair, you’ve made the AI’s job easy.

Make content snippable. Bulleted and numbered lists, comparison tables, step-by-step instructions. These formats give AI clean, extractable fragments. A paragraph buried in a wall of text is harder for AI to isolate than the same information presented as a three-item list.

Front-load the answer. Start sections with the key information, then provide context. If someone asks, “What temperature should I bake bread at?” and your content opens with a two-paragraph history of bread making before mentioning 375°F, you’ll lose the citation to a competitor who leads with the answer.

Keep sections self-contained. Each section should make sense on its own, without requiring the reader to have read the previous section. AI extracts fragments. If your fragment only makes sense in the context of the whole page, it won’t be selected.

An important technical note from Microsoft: “Don’t hide important answers in tabs or expandable menus: AI systems may not render hidden content, so key details can be skipped.” FAQ answers collapsed inside an expandable menu, product specs hidden behind tabs, content that requires interaction to reveal: it may all be invisible to AI. If information is important, it needs to be in the visible HTML.

Authority Signals For AI

E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) isn’t just a Google concept anymore. It’s what AI systems look for across the board, even if they don’t use the term.

Microsoft’s October 2025 guidance describes the baseline: success starts with content that is “fresh, authoritative, structured, and semantically clear.” On the clarity side, they’re specific: “avoid vague language. Terms like innovative or eco mean little without specifics. Instead, anchor claims in measurable facts.” Saying something is “next-gen” or “cutting-edge” without context leaves AI unsure how to classify it.

The research backs this up. The original GEO paper found that writing in a persuasive or authoritative tone did not improve AI visibility. Facts and cited sources did. Marketing language doesn’t impress algorithms.

This connects to the University of Toronto’s finding about earned media dominance. AI systems trust third-party validation more than self-promotion. In consumer electronics, AI cited third-party authoritative sources 92.1% of the time compared to Google’s 54.1%. The implication: getting your expertise published on industry websites, earning press coverage, and building a presence on authoritative platforms matters more for AI visibility than perfecting the copy on your own site.

Freshness is a signal, not a bonus. Stale content rarely gets cited. Krishna Madhavan said at Pubcon Cyber Week: “Stale or missing content will constrain the amount of retrieval we can do and push agents toward alternative sources.”

Schema Markup: From Text To Knowledge

Microsoft’s October 2025 post devotes an entire section to schema. They describe it as code that “turns plain text into structured data that machines can interpret with confidence.” Schema can label your content as a product, review, FAQ, or event, giving AI systems explicit context instead of forcing them to guess. Krishna Madhavan reinforced this at Pubcon: “Schemas are super useful. They help the system discern exactly what your information is without us having to guess.”

The GEO-16 framework confirms this from the academic side. Structured data was one of the top three factors predicting AI citation likelihood, alongside metadata/freshness and semantic HTML.

The schema types that matter most for AI visibility:

  • FAQPage for question-and-answer content (directly maps to how AI formats responses).
  • HowTo for step-by-step instructions.
  • Product with Offer, AggregateRating, and Review for ecommerce.
  • Article/BlogPosting for content with clear authorship and dates.
  • Organization for business identity.

Pair structured data with IndexNow for freshness. As the Bing Webmaster Blog put it: “IndexNow tells search engines that something has changed, while structured data tells them what has changed. Together, they improve both speed and accuracy in indexing.”
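As a concrete illustration of the FAQPage type listed above, here is a short Python sketch that emits the JSON-LD structure to embed in a page. The question-and-answer text is invented for the example; the `@type` and `mainEntity` shape follow the schema.org FAQPage vocabulary.

```python
import json

def faq_jsonld(pairs):
    """Build FAQPage structured data from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

data = faq_jsonld([
    ("What is answer engine optimization?",
     "Structuring content so AI systems can extract and cite it."),
])

# Embed the result in the page inside a
# <script type="application/ld+json"> tag.
snippet = json.dumps(data, indent=2)
```

Generating the markup from the same source as the visible Q&A content keeps the structured data and the on-page answers in sync, which matters given the freshness signals discussed above.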

Crawler Permissions: Who Gets In

AI search engines use distinct crawlers, and most let you control training and search access separately. Here’s who to allow.

  • OAI-SearchBot (ChatGPT): search index. Robots.txt token: OAI-SearchBot
  • GPTBot (OpenAI): model training. Robots.txt token: GPTBot
  • ChatGPT-User (ChatGPT): on-demand browsing. Robots.txt token: ChatGPT-User
  • Bingbot (Microsoft Copilot): search + AI. Robots.txt token: Bingbot
  • Googlebot (Google AI Overviews): search + AI. Robots.txt token: Googlebot
  • Google-Extended (Google): Gemini training. Robots.txt token: Google-Extended
  • PerplexityBot (Perplexity): search + index. Robots.txt token: PerplexityBot
  • Perplexity-User (Perplexity): on-demand browsing. Robots.txt token: Perplexity-User
  • ClaudeBot (Anthropic): training + retrieval. Robots.txt token: ClaudeBot

A sensible robots.txt configuration might allow search crawlers while blocking training:

User-agent: OAI-SearchBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

OpenAI provides the cleanest bot separation. You can allow OAI-SearchBot (so your content appears in ChatGPT search) while blocking GPTBot (so it’s not used for model training). Google’s controls are less granular: blocking Google-Extended prevents Gemini training but has no effect on AI Overviews, which use Googlebot.

OpenAI also offers the most specific technical recommendation of any AI search provider. For their Atlas browser (which uses a standard Chrome user agent, not a bot identifier), they recommend following WAI-ARIA best practices: “Add descriptive roles, labels, and states to interactive elements like buttons, menus, and forms. This helps ChatGPT recognize what each element does and interact with your site more accurately.” Accessibility and AI agent compatibility are the same work.

A caveat on Perplexity: while their documentation states they respect robots.txt, Cloudflare documented in August 2025 that Perplexity uses undeclared crawlers with rotating IPs and spoofed browser user agents to bypass no-crawl directives. This is a contested claim, but it’s worth knowing.

For revenue, Perplexity is the only platform currently offering publisher compensation. Their Comet Plus program provides an 80/20 revenue split (publishers keep 80%) across direct visits, search citations, and agent actions.

Google Vs. Microsoft: Two Philosophies

The contrast between Google and Microsoft on AEO is striking enough to be its own story.

Google says: just do good SEO. Their official documentation is deliberately minimalist: “There are no additional requirements to appear in AI Overviews or AI Mode, nor other special optimizations necessary.” They add that you “don’t need to create new machine readable files, AI text files, or markup to appear in these features.”

Google recommends helpful, reliable, people-first content demonstrating E-E-A-T. Standard structured data. Good page experience. Technical basics. Nothing AI-specific.

Microsoft says: here’s the playbook. Their October 2025 blog post and January 2026 guide provide detailed, actionable guidance. Specific heading structures. Schema recommendations. Content formatting rules. Concrete examples (an AEO product description vs. a GEO product description). Warnings about content hidden in tabs and expandable menus. A framework for thinking about crawled data, product feeds, and live website data as three distinct layers.

What explains the difference? Partly market position. Google dominates search and has less incentive to help publishers optimize for AI features that might reduce clicks to their websites. Microsoft, with Bing’s roughly 8% market share, benefits from providing publishers with reasons to optimize specifically for their ecosystem.

But there’s a practical takeaway: Microsoft’s guidance isn’t Bing-specific. The principles of structured content, clear headings, snippable formats, schema markup, and expert authority are universal. Following Microsoft’s playbook improves your content for every AI system, including Google’s. Google just won’t tell you that.

Measuring AI Visibility

This is the hard part. Traditional SEO has Google Search Console. AI visibility is still fragmented.

Ahrefs analyzed 1.9 million citations from 1 million AI Overviews and found that 76% of citations come from pages already ranking in Google’s top 10. The median ranking for the most-cited URLs was position 2. Traditional ranking still matters for AI citation, but being No. 1 is “a coin flip at best” for getting cited.

The traffic impact is significant. Ahrefs found that AI Overviews correlate with 58% lower click-through rates for the No. 1 position. Seer Interactive reported a 61% organic CTR drop for queries with AI Overviews. But being cited within the AI Overview gives 35% more organic clicks compared to not being cited. Citation is the new ranking.

For tracking, the tool landscape is emerging:

  • Profound: citations across ChatGPT, Perplexity, Copilot, and Google AI Overviews. From $99/mo.
  • Peec.ai: brand mentions across ChatGPT, Gemini, Claude, and Perplexity. From ~$95/mo.
  • Advanced Web Ranking: AIO presence tracking in Google. Included in plans.
  • Bing Webmaster Tools: AI Performance Report for Copilot. Free.

Bing Webmaster Tools is the easiest starting point. It’s free, and the new AI Performance Report shows how your content performs in Copilot citations. For ChatGPT specifically, track utm_source=chatgpt.com in your analytics. OpenAI automatically appends this to referral URLs.
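The `utm_source` check can be automated when your analytics export gives you landing URLs. Here is a small Python sketch using only the standard library; `chatgpt.com` is the value OpenAI documents appending, while the `perplexity.ai` entry and the example URLs are illustrative assumptions.

```python
from urllib.parse import urlparse, parse_qs

# Mapping from utm_source values to platforms. chatgpt.com is documented
# by OpenAI; other entries are assumptions you should verify in your data.
AI_SOURCES = {
    "chatgpt.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
}

def ai_referral_platform(landing_url):
    """Return the AI platform a landing URL's utm_source indicates, or None."""
    params = parse_qs(urlparse(landing_url).query)
    source = params.get("utm_source", [None])[0]
    return AI_SOURCES.get(source)

# Hypothetical landing URLs from an analytics export
hits = [
    "https://example.com/post?utm_source=chatgpt.com",
    "https://example.com/post?utm_source=newsletter",
]
platforms = [ai_referral_platform(url) for url in hits]
```

Running this over a sessions export gives a per-platform count of AI referrals, which is the raw input for the share-of-voice reporting discussed earlier in this issue.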

Conductor’s January 2026 report found that 87.4% of AI referral traffic comes from ChatGPT. That’s one platform dominating the space, which makes tracking it particularly important.

Key Takeaways

  • AI selects fragments, not pages. Structure your content in self-contained, extractable sections with descriptive headings that signal where each idea starts and ends.
  • Clarity beats persuasion. Factual accuracy, cited sources, and direct answers outperform authoritative tone and marketing language. The research consistently shows this.
  • Earned media dominates brand content in AI citations. Press coverage, third-party reviews, and authoritative mentions on other websites carry more weight than your own pages. Build presence beyond your domain.
  • Schema markup is a force multiplier. FAQPage, HowTo, Product, and Article schemas make your content machine-readable. Pair with IndexNow for freshness.
  • Follow Microsoft’s playbook, even for Google. Google says “just do good SEO.” Microsoft provides specific, actionable guidance that improves content for every AI system, Google’s included.
  • Separate training from search in your robots.txt. Allow search crawlers (OAI-SearchBot, Bingbot, PerplexityBot) while blocking training crawlers (GPTBot, Google-Extended) if that’s your preference. You have more control than you might think.
  • Track AI visibility now. Use Bing Webmaster Tools (free), monitor utm_source=chatgpt.com in analytics, and consider dedicated tools as the measurement space matures.
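The search-versus-training split in the takeaways can be expressed as a short robots.txt sketch. This is one possible policy, not a recommendation: it allows the crawlers that fetch pages for AI search and citations while opting out of model training. Note that Google-Extended is a control token rather than a separate crawler, and unlisted bots are unaffected.

```
# Allow crawlers that fetch pages for AI search results and citations
User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Bingbot
Allow: /

# Opt out of data collection for model training
User-agent: GPTBot
Disallow: /

# Control token: opts content out of Gemini training, not crawling
User-agent: Google-Extended
Disallow: /
```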

Traditional SEO asked: “How do I rank?” AEO asks: “How do I become the fragment that gets selected?” The answer isn’t a single trick. It’s clear structure, verifiable expertise, and content that AI can confidently extract and cite.

Up next in Part 3: the protocols powering the agentic web, including MCP, A2A, NLWeb, and AGENTS.md, and how they fit together.

This was originally published on No Hacks.


Featured Image: Meepian Graphic/Shutterstock

https://www.searchenginejournal.com/answer-engine-optimization-how-to-get-your-content-into-ai-responses/570055/




Google Takes Search Live Global With Gemini 3.1 Flash Live via @sejournal, @MattGSouthern

Google is expanding Search Live to more than 200 countries and territories, bringing voice and camera conversations to AI Mode globally.

The expansion is powered by Gemini 3.1 Flash Live, a new audio model that Google calls its highest-quality yet. It’s inherently multilingual, so you can speak with Search in your preferred language without switching settings.

Search Live was previously limited to the U.S.

What’s Changing

Search Live lets you talk to Google Search inside AI Mode instead of typing a query. You ask a question out loud and get an audio response, then continue with follow-ups. Web links appear on screen alongside the voice responses.

The feature also supports camera input. Point your phone at a product label or a piece of equipment and ask Search about what it sees. Google Lens users can tap a “Live” option to start a conversation about what’s in the camera view.

With today’s expansion, both voice and camera capabilities are available in every market where AI Mode is active.

The New Model

Gemini 3.1 Flash Live replaces the previous audio model powering Search Live. Google published benchmark results alongside the announcement.

Gemini Live can now follow a conversation thread for twice as long as the previous model, according to Google, though the company didn’t specify what the previous limit was.

Beyond Search, 3.1 Flash Live is available to developers in preview through the Gemini Live API in Google AI Studio.

Why This Matters

Search Live turns search into a spoken conversation with camera input. Until now, the feature was limited to U.S. users. Today’s expansion makes it available in the markets where AI Mode is live, across more than 200 countries and territories.

There’s no public data yet on how many people use Search Live or how it affects query volume. But Google has been building toward this for the past year. The company launched Search Live in June, added video input in July, and upgraded to Gemini 2.5 Flash Native Audio in December. Each update expanded what the feature can do and who can use it.

Looking Ahead

Google didn’t announce additional Search Live features alongside this expansion. The focus is on geographic reach and the underlying model upgrade.

How the model performs in production across different languages and markets will be worth watching as adoption data becomes available.

https://www.searchenginejournal.com/google-takes-search-live-global-with-gemini-3-1-flash-live/570602/




Google’s Liz Reid Says LLMs Unlock Audio And Video Indexing via @sejournal, @MattGSouthern

In a podcast interview, Google VP of Search Liz Reid described two ways LLMs are changing what Google can index and how it ranks results for individual users.

Reid told the Access Podcast that multimodal AI models now allow Google to understand audio and video content at a deeper level than was previously possible. She also pointed to a future where search results adapt based on a user’s paid subscriptions.

What’s New

Multimodal Understanding Is Expanding What Google Can Index

Reid said that because LLMs are multimodal, they have opened up content formats Google previously struggled to process.

Reid told the hosts:

“The great thing about LLM is they’re multimodal. So we can actually understand audio content and video content actually at a level we couldn’t years ago.”

She went further, describing how Google can now go beyond basic transcription when analyzing video.

“Now you can understand audio much better. Now you can understand video much better. Now you can understand not just the video transcription but like what is the video more about or what’s the style or other things like that.”

Reid connected this to a long-standing gap in how search works for non-English speakers. For users in India who speak Hindi or other languages, the web often lacks the information they need in their language. Previously, translating all web content into every language wasn’t scalable. LLMs changed that.

“Now with an LLM, you can take information in one language, understand it, and then output in another language. Like that opens up information.”

Google has been moving in this direction for some time. In October 2025, Reid told the Wall Street Journal that Google had adjusted ranking to surface more short-form video, forums, and user-generated content.

The comments also add context to Google’s Audio Overviews experiment launched in Search Labs last June, which generates spoken AI summaries of search results.

That wasn’t possible a few years ago. In 2021, Google and KQED tested whether audio content could be made searchable and found that speech-to-text accuracy wasn’t high enough, particularly for proper nouns and regional references. Reid’s comments suggest that the barrier has fallen.

Subscription-Aware Search Could Change How Results Are Personalized

Reid also outlined a direction for personalization that goes beyond Google’s existing Preferred Sources feature.

She told the hosts Google wants to surface content from outlets a user pays for, not paywalled results from sources they can’t access.

“If you love this source and you do have a relationship with it then that content should surface more easily for you on Google.”

Reid gave a practical example. Say 20 interviews on a topic are paywalled but a user subscribes to one outlet. Google should make it easy to find the one they can read.

“We should surface the one that they’re paying for and not the six that they can’t get access to more.”

She suggested the company has “taken small steps so far but want to do more” to strengthen how audiences and trusted sources connect through search. She also mentioned the possibility of micropayments for individual articles, though she acknowledged that model hasn’t taken off historically.

Google expanded Preferred Sources globally for English-language users in December, and announced a feature that highlights links from users’ paid news subscriptions. Google said it would prioritize those links in a dedicated carousel, starting in the Gemini app, with AI Overviews and AI Mode to follow. At the time, Google said users who pick a preferred source click to that site twice as often on average. Reid’s comments suggest the company sees subscription-aware search as a broader evolution of that same direction.

Why This Matters

The multimodal capabilities Reid pointed to expand which content formats get discovered through search. Podcasts, video series, and audio-first content have historically been harder for Google to evaluate beyond metadata and transcripts. Google’s growing ability to assess relevance and depth from audio and video directly changes who can be found through search and how.

For brands and creators investing in non-text formats, Google’s ability to surface that work is catching up to where the audience already is.

The subscription-aware personalization direction matters for any publisher with a paywall or membership model. Search results that adapt to what individual users pay for would tighten the connection between subscriber retention and search visibility. Paywalled content could perform better for the audience that matters most to the publisher, rather than being deprioritized because most users can’t access it.

Looking Ahead

Reid didn’t attach timelines to either development. The multimodal indexing capabilities she talked about appear to be current, while the subscription-aware personalization is a stated direction with some existing features already in place.

Google I/O is scheduled for May 19-20. Reid said on the podcast that the company is “actively building” but that the pace of AI development means some features could come together as late as April and still make it to the stage.


Featured Image: Mawaddah F/Shutterstock

https://www.searchenginejournal.com/googles-liz-reid-says-llms-unlock-audio-and-video-indexing/569009/