Google Lists 9 Scenarios That Explain How It Picks Canonical URLs via @sejournal, @martinibuster

Google’s John Mueller answered a question on Reddit about why Google picks one web page over another when multiple pages have duplicate content, also explaining why Google sometimes appears to pick the wrong URL as the canonical.

Canonical URLs

The word canonical was historically used mainly in a religious sense, describing which writings or beliefs were recognized as authoritative. In the SEO community, the word refers to which URL is the true web page when multiple web pages share the same or similar content.

Google enables site owners and SEOs to hint at which URL is the canonical by using an HTML attribute called rel=canonical. SEOs often refer to rel=canonical as an HTML element, but it’s not. Rel=canonical is an attribute of the <link> element. An HTML element is a building block for a web page. An attribute is markup that modifies the element.
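As an illustration, a duplicate page can point to its preferred version from within its <head> section (the URL below is a placeholder):

```html
<!-- Placed in the <head> of the duplicate page; the href is a placeholder URL -->
<link rel="canonical" href="https://www.example.com/original-page/" />
```

Here, rel="canonical" is the attribute, and <link> is the element it modifies.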

Why Google Picks One URL Over Another

A person on Reddit asked Mueller to provide a deeper dive on the reasons why Google picks one URL over another.

They asked:

“Hey John, can I please ask you to go a little deeper on this? Let’s say I want to understand why Google thinks two pages are duplicate and it chooses one over the other and the reason is not really in plain sight. What can one do to better understand why a page is chosen over another if they cover different topics? Like, IDK, red panda and “regular” panda 🐼. TY!!”

Mueller answered with roughly nine reasons why Google chooses one page over another, including the technical reasons why Google appears to get it wrong when, in reality, it’s sometimes due to something that the site owner or SEO overlooked.

Here are the nine reasons he cited for canonical choices:

  1. Exact duplicate content
    The pages are fully identical, leaving no meaningful signal to distinguish one URL from another.
  2. Substantial duplication in main content
    A large portion of the primary content overlaps across pages, such as the same article appearing in multiple places.
  3. Too little unique main content relative to template content
    The page’s unique content is minimal, so repeated elements like navigation, menus, or layout dominate and make pages appear effectively the same.
  4. URL parameter patterns inferred as duplicates
    When multiple parameterized URLs are known to return the same content, Google may generalize that pattern and treat similar parameter variations as duplicates.
  5. Mobile version used for comparison
    Google may evaluate the mobile version instead of the desktop version, which can lead to duplication assessments that differ from what is manually checked.
  6. Googlebot-visible version used for evaluation
    Canonical decisions are based on what Googlebot actually receives, not necessarily what users see.
  7. Serving Googlebot alternate or non-content pages
    If Googlebot is shown bot challenges, pseudo-error pages, or other generic responses, those may match previously seen content and be treated as duplicates.
  8. Failure to render JavaScript content
    When Google cannot render the page, it may rely on the base HTML shell, which can be identical across pages and trigger duplication.
  9. Ambiguity or misclassification in the system
    In some cases, a URL may be treated as duplicate simply because it appears “misplaced” or due to limitations in how the system interprets similarity.
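The parameter-pattern inference in point 4 can be illustrated with a toy sketch. The URLs and logic below are illustrative only (Google’s actual systems are not public): if every URL seen so far with a given path and set of parameter names returned duplicate content, a new URL matching that signature is assumed to be a duplicate too, while a URL with extra parameters does not match the learned pattern.

```python
from urllib.parse import urlsplit, parse_qsl

def signature(url: str):
    """Reduce a URL to its path plus the sorted set of parameter names."""
    parts = urlsplit(url)
    names = tuple(sorted(name for name, _ in parse_qsl(parts.query)))
    return (parts.path, names)

# URLs already known to serve identical content (hypothetical examples)
known_duplicates = {
    signature("https://example.com/page?tmp=1234"),
    signature("https://example.com/page?tmp=3458"),
}

def likely_duplicate(url: str) -> bool:
    """Generalize: a URL matching a known-duplicate signature is likely a duplicate."""
    return signature(url) in known_duplicates

print(likely_duplicate("https://example.com/page?tmp=9339"))               # True
print(likely_duplicate("https://example.com/page?tmp=1234&city=detroit"))  # False
```

The second lookup fails because the extra city parameter changes the signature, which mirrors Mueller’s point that multiple parameters make this kind of generalization tricky and error-prone.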

Here’s Mueller’s complete answer:

“There is no tool that tells you why something was considered duplicate – over the years people often get a feel for it, but it’s not always obvious. Matt’s video “How does Google handle duplicate content?” is a good starter, even now.

Some of the reasons why things are considered duplicate are (these have all been mentioned in various places – duplicate content about duplicate content if you will :-)): exact duplicate (everything is duplicate), partial match (a large part is duplicate, for example, when you have the same post on two blogs; sometimes there’s also just not a lot of content to go on, for example if you have a giant menu and a tiny blog post), or – this is harder – when the URL looks like it would be duplicate based on the duplicates found elsewhere on the site (for example, if /page?tmp=1234 and /page?tmp=3458 are the same, probably /page?tmp=9339 is too — this can be tricky & end up wrong with multiple parameters, is /page?tmp=1234&city=detroit the same too? how about /page?tmp=2123&city=chicago ?).

Two reasons I’ve seen people get thrown off are: we use the mobile version (people generally check on desktop), and we use the version Googlebot sees (and if you show Googlebot a bot-challenge or some other pseudo-error-page, chances are we’ve seen that before and might consider it a duplicate). Also, we use the rendered version – but this means we need to be able to render your page if it’s using a JS framework for the content (if we can’t render it, we might take the bootstrap HTML page and, chances are it’ll be duplicate).

It happens that these systems aren’t perfect in picking duplicate content, sometimes it’s also just that the alternative URL feels obviously misplaced. Sometimes that settles down over time (as our systems recognize that things are really different), sometimes it doesn’t.

If it’s similar content then users can still find their way to it, so it’s generally not that terrible. It’s pretty rare that we end up escalating a wrong duplicate – over the years the teams have done a fantastic job with these systems; most of the weird ones are unproblematic, often it’s just some weird error page that’s hard to spot.”

Takeaway

Mueller offered a deep dive into the reasons why Google chooses canonicals. He described canonical selection as a fuzzy sorting system built from overlapping signals: Google compares content, URL patterns, rendered output, and the crawler-visible version, while borderline classifications (“weird ones”) are given a pass because they don’t pose a problem.

Featured Image by Shutterstock/Garun .Prdt

https://www.searchenginejournal.com/how-google-picks-canonical-urls/571914/




New Google Spam Policy Targets Back Button Hijacking via @sejournal, @MattGSouthern

Google added a new section to its spam policies designating “back button hijacking” as an explicit violation under the malicious practices category. Enforcement begins on June 15, giving websites two months to make changes.

Google published a blog post explaining the policy. It also updated the spam policies documentation to list back-button hijacking alongside malware and unwanted software as a malicious practice.

What Is Back Button Hijacking

Back button hijacking occurs when a site interferes with browser navigation and prevents users from returning to the previous page. Google’s blog post describes several ways this can happen.

Users might be sent to pages they never visited. They might see unsolicited recommendations or ads. Or they might be unable to navigate back at all.

Google wrote in the blog post:

“When a user clicks the ‘back’ button in the browser, they have a clear expectation: they want to return to the previous page. Back button hijacking breaks this fundamental expectation.”

Why Google Is Acting Now

Google said it’s seen an increase in this behavior across the web. The blog post noted that Google has previously warned against inserting deceptive pages into browser history, referencing a 2013 post on the topic, and said the behavior “has always been against” Google Search Essentials.

Google wrote:

“People report feeling manipulated and eventually less willing to visit unfamiliar sites.”

What Enforcement Looks Like

Sites involved in back button hijacking risk manual spam penalties or automated demotions, both of which can lower their visibility in Google Search results.

Google is giving a two-month grace period before enforcement starts on June 15. This follows a similar pattern to the March 2024 spam policy expansion, which also gave sites two months to comply with the new site reputation abuse policy.

Third-Party Code As A Source

Google’s blog post acknowledges that some back-button hijacking may not originate from the site owner’s code.

Google wrote:

“Some instances of back button hijacking may originate from the site’s included libraries or advertising platform.”

Google’s wording indicates sites can be affected even if issues come from third-party libraries or ad platforms, placing responsibility on websites to review what runs on their pages.

How This Fits Into Google’s Spam Policy Framework

The addition falls under Google’s category of malicious practices. That section discusses behaviors causing a gap between user expectations and experiences, including malware distribution and unwanted software installation. Google expanded the existing spam policy category instead of creating a new one.

The March 2026 spam update completed its rollout less than three weeks ago. That update enforced existing policies without adding new ones. Today’s announcement adds new policy language ahead of the June 15 enforcement date.

Why This Matters

Sites using advertising scripts, content recommendation widgets, or third-party engagement tools should audit those integrations before June 15. Any script that manipulates browser history or prevents normal back-button navigation is now a potential spam violation.

The two-month window is the compliance period. After June 15, Google can take manual or automated action.

Sites that receive a manual action can submit a reconsideration request through Search Console after fixing the issue.

Looking Ahead

Google hasn’t indicated whether enforcement will come through a dedicated spam update or through ongoing SpamBrain and manual review.

https://www.searchenginejournal.com/new-google-spam-policy-targets-back-button-hijacking/571859/




The Dangerous Seduction Of Click-Chasing

It works, until it doesn’t.

The Chase

Imagine you’re a news publisher. Your journalism is good, you write original stories, and your website is relatively popular within your editorial niche.

Revenue is earned primarily via advertising. Google search is your biggest source of visitors.

Management demands growth, and elevates traffic to the throne of all key performance indicators. Engagement, loyalty, subscriptions – these are now secondary objectives. Getting the click, that is the driving purpose.

You look at your channels to determine where growth is most likely to come from. Search seems the most viable channel. So, you make SEO a key focus area.

As part of your SEO efforts, you come across specific tactics that cause your stories to generate more clicks. These tactics are very effective. Applying them to your stories results in significantly more traffic than before.

You’ve caught the scent. The chase for clicks is on.

These tactics demand that your stories focus on clicks above all. Within the context of these SEO-first tactics, every story is a traffic opportunity.

At first, you manage to apply these tactics within the framework of your existing journalism. Your stories are still good and unique, and you apply SEO as best you can to ensure each gets the best chance of generating traffic. It works, and your traffic grows.

But the pressures of management demand more. More growth. More revenue. More ad impressions. More traffic.

The newsroom submits. Stories are commissioned only if they have sufficient traffic potential. Journalists learn to just write stories that generate clicks. Headlines are crafted to maximize click-through rates, not to inform readers. You write multiple stories about the exact same news, each with a slightly different angle. Articles bury the lede.

Everything is subject to the chase.

Your scope expands. You don’t just write stories within your established specialism – you branch out. Different topics. New sections. Product reviews and recommendations. Listicles.

Everything is fair game, as long as it generates clicks.

And it works. Oh boy, does it work.

Image Credit: Barry Adams

The flywheel gathers momentum. You learn exactly what people click on, how to craft the perfect headline, select the ideal image, find the precise angle that will make people stop scrolling and tap on your article.

Traffic keeps growing.

But, somehow, you don’t feel entirely at ease. Because you know that, when you look at your content objectively, something has been lost. Your site used to be about journalism, about informing readers, improving knowledge and awareness, and enabling policies and decisions. It used to be good.

Now, none of that really matters anymore. Your site is about clicks. Everything else is secondary.

But management is happy. Revenue is up. Profits surge. So it’s alright, isn’t it?

Isn’t it?

Image Credit: Barry Adams

Google rolls out a core algorithm update. You lose 20% of your search traffic overnight. It’s a shot across the bow. A warning. But you ignore it. You focus on the chase even more. Tighter content focus. More variations of the same stories. Better SEO.

Traffic stabilizes. No more growth, but you’re chugging along nicely. You maybe change a few things, try to get back onto a growth curve. Nothing works, but you’re not losing either. Things look stable. You can live with this.

Then the next Google core update hits. You lose 50% of your current search traffic. It’s code red in the newsroom. All hands on deck.

How do we recover? How do we get this traffic back? It’s our traffic, Google owes us!

You do what you’ve gotten very good at. You SEO the hell out of your site. Everything is optimized and maximized. Your technical SEO goes from “that will do” to a state of such perfection it could make a web nerd cry. Your content output becomes even more focused on areas with the biggest traffic potential.

In the chase for revenue, you try alternative monetization. Affiliate content. Gambling promos. Advertorials. More listicles. More product recommendations. More of everything.

Then the next update arrives. You lose again.

And the next one.

And the next one.

You lose, almost every single time.

Image Credit: Barry Adams

It worked. Until it didn’t.

And now your site is on Google’s shitlist. Your relentless focus on growth at the expense of quality has accumulated so many negative signals that Google will not allow you to return to your previous heights.

You know none of what you try will work. Those traffic graphs won’t go back up. Every Google core update causes a new surge of existential dread: How much will we lose this time?

And yet, you still chase. You’ve long since lost the scent. But the chase still rules. Because you know that, to stop the chase, something needs to change. Something big and profound. And making that change will be painful. Extremely painful.

But do you have a choice?

Hindsight

I wish this scenario were unique, a singular publisher making the mistake of focusing on traffic at the expense of quality. But it’s a tragically common theme, played out in digital newsrooms hundreds of times over the last 10 years.

In every instance, at some point, the seductive appeal of traffic began to outweigh the journalistic principles of the organization. Compromises were made so growth could be achieved.

And because these compromises had the intended result – at first – there was nothing to caution the publisher against traveling further down this path.

Well, nothing besides Google shouting at every opportunity that you should focus on quality, not clicks.

Besides every SEO professional that has ever dealt with a bad algorithm update saying you should focus on quality, not clicks.

Besides your best journalists abandoning ship in favor of a quality-focused outlet or their own Substack.

Besides your own loyal readers abandoning your site because you stopped focusing on quality and went after clicks.

The writing has been on the wall, in huge capital letters, for the better part of a decade. Arguably, since 2018, when Google began rolling out algorithm updates to penalize low-effort content. If you’d been paying attention, none of this would have been a surprise.

Hey, maybe you did see it coming. But you weren’t able to make the required changes, because the clicks were still there. You were never going to deliberately abandon growth for some vague promise of sustainable traffic and audience loyalty.

If only you’d known that, once the Google hammer came down, the damage would be permanent. Maybe you wouldn’t have started the chase in the first place.

If only you’d known.

Recovery

When a site is so heavily affected by consecutive Google core updates, is there any hope of recovery? Can a website climb its way back to those lofty traffic heights?

We need to be realistic and accept that those halcyon days of near-limitless traffic growth are not coming back. The ecosystem has changed. Growth is harder to achieve, and online news is working under a lower ceiling than ever before.

But recovery is possible, to an extent. You will never achieve the same traffic peaks as in your prime days, but you can claw back a significant chunk – provided you are willing to do what it takes.

The recipe is simple, on paper: Everything you do should be in service of the reader.

Every story needs to be crafted to deliver maximum value for your readers. Every design element on your site needs to be optimized for the best user experience. Every headline must be informative first and foremost. Every article must deliver on its headline’s promise in spades. Every piece of content should serve to inform, educate, and delight your audience.

In short, your entire output should revolve around audience loyalty.

Not growth. Not traffic.

Loyalty.

Build a news platform so good that your readers don’t ever think about going anywhere else.

Of course, you still need traffic, but this must be a secondary concern. Start with your audience, and then apply layers on top of your stories to aid their traffic potential.

Your output should be focused on original journalism – not rehashing the same stories that others are reporting. If all you do is take someone else’s story and write different angles on it, you’re not doing journalism.

Provide breaking news, expert commentary, detailed analysis, and a deep focus on your editorial specialties.

And accept that your audience isn’t a singular entity, but consumes news on multiple platforms and in multiple formats. Video, podcasts, newsletters, social media, you name it. Fire on all channels, as best you can.

Sounds simple. But very few publishers I’ve spoken with have the internal fortitude for such drastic cultural changes in their online newsroom. Most of the publishers I consult with that were affected by core updates just want a list of quick wins, some easy fixes they can implement to get their traffic back.

They want busy-work. They’re not interested in meaningful change. Because meaningful change is hard, and painful.

But also absolutely necessary.

That’s it for another edition. As always, thanks for reading and subscribing, and I’ll see you at the next one!

This post was originally published on SEO For Google News.


Featured Image: Roman Samborskyi/Shutterstock

https://www.searchenginejournal.com/the-dangerous-seduction-of-click-chasing/571533/




Google’s Task-Based Agentic Search Is Disrupting SEO Today, Not Tomorrow via @sejournal, @martinibuster

Google’s Sundar Pichai recently said that the future of Search is agentic, but what does that really mean? A recent tweet from Google’s search product lead shows what the new kind of task-based search looks like. It’s increasingly apparent that the internet is transitioning to a model where every person has their own agent running tasks on their behalf, experiencing an increasingly personal internet.

Search Is Becoming Task-Oriented

The internet, with search as its gateway, follows a model in which websites are indexed, ranked, and served to users who enter largely the same queries and retrieve largely the same sets of web pages. AI is starting to break that model because users are shifting to researching topics, where a link to a website does not provide the clear answers they are gradually becoming conditioned to expect. The internet was built to serve websites that users could visit and read, and to connect people with each other via social media.

What’s changing is that now people can use that same search box to do things, exactly as Pichai described. For example, Google recently announced the worldwide rollout of the ability to describe the needs for a restaurant reservation, and AI agents go out and fetch the information, including booking information.

Google’s Search Product Lead Rose Yao tweeted:

“Date nights and big group dinners just got a lot easier.

We’re thrilled to expand agentic restaurant booking in Search globally, including the UK and India!

Tell AI Mode your group size, time, and vibe—it scans multiple platforms simultaneously to find real-time, bookable spots.

No more app-switching. No more hassle. Just great food.”

That’s not search, that’s task completion. What was not stated is that restaurants will need to be able to interact with these agents, providing information like available reservation slots and that evening’s menu choices, and at some point their websites will need to be able to complete a booking with the AI agent. This is not something that’s coming in the near future; it’s here right now.

That is exactly what Pichai was talking about when he recently described the future of search:

“I feel like in search, with every shift, you’re able to do more with it.

…If I fast forward, a lot of what are just information seeking queries will be agentic search. You will be completing tasks, you have many threads running.”

When asked if search will still be around in ten years, Pichai answered:

“Search would be an agent manager, right, in which you’re doing a lot of things.

…And I can see search doing versions of those things, and you’re getting a bunch of stuff done.”

Everyone Has Their Own Personal Internet

Cloudflare recently published an article that says the internet was the first way for humans to interact with online content, and that cloud infrastructure was the second adaptation that emerged to serve the needs of mobile devices. The next adaptation is wild and has implications for SEO because it introduces a hyper-personalized version of the web that impacts local SEO, shopping, and information retrieval.

AI agents are currently forced to use an internet infrastructure that’s built to serve humans. That’s the part that Cloudflare says is changing. But the more profound insight is that the old way, where millions of people asked the same question and got the same indexed answer, is going away. What’s replacing it is a hyper-personal experience of the web, where every person can run their own agent.

Cloudflare explains:

“Unlike every application that came before them, agents are one-to-one. Each agent is a unique instance. Serving one user, running one task. Where a traditional application follows the same execution path regardless of who’s using it, an agent requires its own execution environment: one where the LLM dictates the code path, calls tools dynamically, adjusts its approach, and persists until the task is done.

Think of it as the difference between a restaurant and a personal chef. A restaurant has a menu — a fixed set of options — and a kitchen optimized to churn them out at volume. That’s most applications today. An agent is more like a personal chef who asks: what do you want to eat? They might need entirely different ingredients, utensils, or techniques each time. You can’t run a personal-chef service out of the same kitchen setup you’d use for a restaurant.”

Cloudflare’s angle is that they are providing the infrastructure to support the needs of billions of agents representing billions of humans. But that is not the part that concerns SEO. The part that concerns digital marketing is that the moment when search transforms into an “agent manager” is here, right now.

WordPress 7.0

Content management systems are rapidly adapting to this change. It’s difficult to overstate the importance of the soon-to-be-released WordPress 7.0, which is packed with capabilities for connecting to AI systems that will enable the internet’s transition from a human-centered web to an increasingly agent-centered web.

The current internet is built for human interaction. Agents are operating within that structure, but that’s going to change very fast. The search marketing community really needs to wrap its collective mind around this change and to really understand how content management systems fit into that picture.

What Sources Do The Agents Trust?

Search marketing professional Mike Stewart recently posted on Facebook about this change, reflecting on what it means to him.

He wrote:

“I let Claude take over my computer.
Not metaphorically — it moved my mouse, opened apps, and completed tasks on its own.
That’s when something clicked…
This isn’t just AI assisting anymore.
This is AI operating on your behalf.

Google’s CEO is already talking about “agentic search” — where AI doesn’t just return results, it manages the process.
So the real questions become:
👉 Who controls the journey?
👉 What sources does the agent trust?
👉 Where does your business show up in that decision layer?
Because you don’t get “agentic search” without the ecosystem feeding it — websites, content, businesses.

That part isn’t going away. But it is being abstracted.”

Task-Based Agentic Search

I think the part we need to wrap our heads around is that humans are still making the decision to click the “make the reservation” button, and that at some point, at least at the B2B layer, making purchases will increasingly become automated.

I still have my doubts about the complete automation of shopping. It feels unnatural, but it’s easy to see that the day may rapidly be approaching when, instead of writing a shopping list, a person will just tell an AI agent to talk to the local grocery store AI agent to identify which one has the items in stock at the best price, dump it into a shopping cart, and show it to the human, who then approves it.

The big takeaway is that the web may be transitioning to the “everyone has a personal chef” model, and that’s a potentially scary level of personalization. How does an SEO optimize for that? I think that’s where WordPress 7.0 comes in, as well as any other content management systems that are agentic-web ready.

Featured Image by Shutterstock/Stock-Asso

https://www.searchenginejournal.com/googles-task-based-search/571800/




How AI Chooses Which Brands To Recommend: From Relational Knowledge To Topical Presence via @sejournal, @Dixon_Jones

Ask ChatGPT or Claude to recommend a product in your market. If your brand does not appear, you have a problem that no amount of keyword optimization will fix.

Most SEO professionals, when faced with this, immediately think about content. More pages, more keywords, better on-page signals. But the reason your brand is absent from an AI recommendation may have nothing to do with pages or keywords. It has to do with something called relational knowledge, and a 2019 research paper that most marketers have never heard of.

The Paper Most Marketers Missed

In September 2019, Fabio Petroni and colleagues at Facebook AI Research and University College London published “Language Models as Knowledge Bases?” at EMNLP, one of the top conferences in natural language processing.

Their question was straightforward: Does a pretrained language model like BERT actually store factual knowledge in its weights? Not linguistic patterns or grammar rules, but facts about the world. Things like “Dante was born in Florence” or “iPod Touch is produced by Apple.”

To test this, they built a probe called LAMA (LAnguage Model Analysis). They took known facts, thousands of them drawn from Wikidata, ConceptNet, and SQuAD, and converted each one into a fill-in-the-blank statement. “Dante was born in ___.” Then they asked BERT to predict the missing word.
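The triple-to-cloze conversion at the heart of LAMA can be sketched in a few lines. The templates and facts below are illustrative examples in the spirit of the paper, not the actual LAMA dataset:

```python
def to_cloze(subject: str, template: str) -> str:
    """Turn a (subject, relation) pair into a fill-in-the-blank query.

    The template encodes the relation, with [X] standing in for the
    subject and [MASK] marking the object the model must predict.
    """
    return template.replace("[X]", subject)

# (subject, relation template, expected object) triples
facts = [
    ("Dante", "[X] was born in [MASK].", "Florence"),
    ("iPod Touch", "[X] is produced by [MASK].", "Apple"),
    ("Japan", "The capital of [X] is [MASK].", "Tokyo"),
]

for subject, template, expected in facts:
    print(f"{to_cloze(subject, template)}  -> expected: {expected}")
```

Each resulting query is fed to a masked language model such as BERT, and the model’s top prediction for [MASK] is scored against the expected object.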

BERT, without any fine-tuning, recalled factual knowledge at a level competitive with a purpose-built knowledge base. That knowledge base had been constructed using a supervised relation extraction system with an oracle-based entity linker, meaning it had direct access to the sentences containing the answers. A language model that had simply read a lot of text performed nearly as well.

The model was not searching for answers. It had absorbed associations between entities and concepts during training, and those associations were retrievable. BERT had built an internal map of how things in the world relate to each other.

After this, the research community started taking seriously the idea that language models work as knowledge stores, not merely as pattern-matching engines.

What “Relational Knowledge” Means

Petroni tested what he and others called relational knowledge: facts expressed as a triple of subject, relation, and object. For example: (Dante, [born-in], Florence). (Kenya, [diplomatic-relations-with], Uganda). (iPod Touch, [produced-by], Apple).

What makes this interesting for brand visibility (and AIO) is that Petroni’s team discovered that the model’s ability to recall a fact depends heavily on the structural type of the relationship. They identified three types, and the accuracy differences between them were large.

1-To-1 Relations: One Subject, One Object

These are unambiguous facts. “The capital of Japan is ___.” There is one answer: Tokyo. Every time the model encountered Japan and capital in the training data, the same object appeared. The association built up cleanly over repeated exposure.

BERT got these right 74.5% of the time, which is high for a model that was never explicitly trained to answer factual questions.

N-To-1 Relations: Many Subjects, One Object

Here, many different subjects share the same object. “The official language of Mauritius is ___.” The answer is English, but English is also the answer for dozens of other countries. The model has seen the pattern (country → official language → English) many times, so it knows the shape of the answer well. But it sometimes defaults to the most statistically common object rather than the correct one for that specific subject.

Accuracy dropped to around 34%. The model knows the category but gets confused within it.

N-To-M Relations: Many Subjects, Many Objects

This is where things get messy. “Patrick Oboya plays in position ___.” A single footballer might play midfielder, forward, or winger depending on context. And many different footballers share each of those positions. The mapping is loose in both directions.

BERT’s accuracy here was only about 24%. The model typically predicts something of the correct type (it will say a position, not a city), but it cannot commit to a specific answer because the training data contains too many competing signals.

I find this super useful because it maps directly onto what happens when an AI tries to recommend a brand. Brands (without monopolies) operate in a “many-to-many” relationship. So “Recommend a [Brand] with a [feature]” is one of the hardest things for AI to “predict” with consistency. I will come back to that…

What Has Happened Since 2019

Petroni’s paper established that language models store relational knowledge. The obvious next question was: where, exactly?

In 2022, Damai Dai and colleagues at Microsoft Research published “Knowledge Neurons in Pretrained Transformers” at ACL. They introduced a method to locate specific neurons in BERT’s feed-forward layers that are responsible for expressing specific facts. When they activated these “knowledge neurons,” the model’s probability of producing the correct fact increased by an average of 31%. When they suppressed them, it dropped by 29%.

OMG! This is not a metaphor. Factual associations are encoded in identifiable neurons within the model. You can find them, and you can change them.

Later that year, Kevin Meng and colleagues at MIT published “Locating and Editing Factual Associations in GPT” at NeurIPS. This took the same ideas and applied them to GPT-style models, which is the architecture behind ChatGPT, Claude, and the AI assistants that buyers actually use when they ask for recommendations. Meng’s team found they could pinpoint the specific components inside GPT that activate when the model recalls a fact about a subject.

More importantly, they could change those facts. They could edit what the model “believes” about an entity without retraining the whole system.

That finding matters for SEOs. If the associations inside these models were fixed and permanent, there would be nothing to optimize for. But they are not fixed. They are shaped by what the model absorbed during training, and they shift when the model is retrained on new data. The web content, the technical documentation, the community discussions, the analyst reports that exist when the next training run happens will determine which brands the model associates with which topics.

So, the progress from 2019 to 2022 looks like this. Petroni showed that models store relational knowledge. Dai showed where it is stored. Meng showed it can be changed. That last point is the one that should matter most to anyone trying to influence how AI recommends brands.

What This Means For Brands In AI Search

Let me translate Petroni’s three relation types into brand positioning scenarios.

The 1-To-1 Brand: Tight Association

Think of Stripe and online payments. The association is specific and consistently reinforced across the web. Developer documentation, fintech discussions, startup advice columns, integration guides: They all connect Stripe to the same concept. When someone asks an AI, “What is the best payment processing platform for developers?” the model retrieves Stripe with high confidence, because the relational link is unambiguous.

This is Petroni’s 1-to-1 dynamic. Strong signal, no competing noise.

The N-To-1 Brand: Lost In The Category

Now consider being one of 15 cybersecurity vendors associated with “endpoint protection.” The model knows the category well. It has seen thousands of discussions about endpoint protection. But when asked to recommend a specific vendor, it defaults to whichever brand has the strongest association signal. Usually, that is the one most discussed in authoritative contexts: analyst reports, technical forums, standards documentation.

If your brand is present in the conversation but not differentiated, you are in an N-to-1 situation. The model might mention you occasionally, but it will tend to retrieve the brand with the strongest association instead.

The N-To-M Brand: Everywhere And Nowhere

This is the hardest position. A large enterprise software company operating across cloud infrastructure, consulting, databases, and hardware has associations with many topics, but each of those topics is also associated with many competitors. The associations are loose in both directions.

The result is what Petroni observed with N-to-M relations: The model produces something of the correct type but cannot commit to a specific answer. The brand appears occasionally in AI recommendations but never reliably for any specific query.

I see this pattern frequently when working with enterprise brands. They have invested heavily in content across many topics, but have not built the kind of concentrated, reinforced associations that the model needs to retrieve them with confidence for any single one.

Measuring The Gap

If you accept the premise (and the research supports it) that AI recommendations are driven by relational associations stored in the model’s weights, then the practical question is: Can you measure where your brand sits in that landscape?

AI Share of Voice is the metric most teams start with. It tells you how often your brand appears in AI-generated responses. That is useful, but it is a score without a diagnosis. Knowing your Share of Voice is 8% does not tell you why it is 8%, or which specific topics are keeping you out of the recommendations where you should appear.

Two brands can have identical Share of Voice scores for completely different structural reasons. One might be broadly associated with many topics but weakly on each. Another might be deeply associated with two topics but invisible everywhere else. These are different problems requiring different strategies.

This is the gap that a metric called AI Topical Presence, developed by Waikay, is designed to address. Rather than measuring whether you appear, it measures what the AI associates you with, and what it does not. [Disclosure: I am the CEO of Waikay]

Topical Presence is a way to measure relational knowledge, and it is as important as Share of Voice (Image from author, March 2026)

The metric captures three dimensions. Depth measures how strongly the AI connects your brand to relevant topics, weighted by importance. Breadth measures how many of the core commercial topics in your market the AI associates with your brand. Concentration measures how evenly those associations are distributed, using a Herfindahl-Hirschman Index borrowed from competition economics.

A brand with high depth but low breadth is known well for a few things but invisible for many others. A brand with wide coverage but high concentration is fragile: One model update could change its visibility significantly. The component breakdown tells you which problem you have and which lever to pull.
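As a rough sketch of how three such dimensions could be computed (the topic names, scores, and exact formulas below are my own illustrative assumptions, not Waikay's actual methodology):

```python
# Hypothetical association scores between one brand and its market's
# core commercial topics (names and values invented for illustration).
scores = {
    "endpoint protection": 0.6,
    "zero trust": 0.3,
    "SIEM": 0.1,
    "email security": 0.0,
}

def topical_presence(scores):
    covered = [s for s in scores.values() if s > 0]
    breadth = len(covered) / len(scores)        # share of core topics covered at all
    depth = sum(covered) / len(covered)         # average strength where present
    shares = [s / sum(covered) for s in covered]
    hhi = sum(share ** 2 for share in shares)   # Herfindahl-Hirschman Index:
    return breadth, depth, hhi                  # 1/n (even spread) .. 1.0 (one topic)

breadth, depth, hhi = topical_presence(scores)
print(f"breadth={breadth:.2f} depth={depth:.2f} concentration={hhi:.2f}")
# breadth=0.75 depth=0.33 concentration=0.46
```

A concentration near 1/n means the associations are spread evenly; a value near 1.0 means visibility hangs on a single topic, which is exactly the fragility the HHI is borrowed from competition economics to expose.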

In the chart above, we start to see how different brands are really competing with each other in a way we have not been able to see before. For example, Inlinks is competing much more closely with a product called Neuronwriter than previously understood. Neuronwriter has less share of voice (I probably helped them by writing this article… oops!), but it has a better topical presence around the prompt, “What are the best semantic SEO tools?” So, all things being equal, a bit of marketing is all it needs to overtake Inlinks. This, of course, assumes that Inlinks stands still. It won’t. By contrast, the threat from Ahrefs is ever-present, but as a full-service offering, it has to spread its “share of voice” across all of its products. So while its topical presence is high, the brand is not the natural choice for an LLM to recommend for this prompt.

This connects back to Petroni’s framework. If your brand is in a 1-to-1 position for some topics but absent from others, topical presence shows you where the gaps are. If you are in an N-to-1 or N-to-M situation, it helps you identify which associations need strengthening and which topics competitors have already built dominant positions on.

From Ranking Pages To Building Associations

For 25 years, SEO has been about ranking pages. PageRank itself was a page-level algorithm; the clue was always in the name (IYKYK … No need to correct me…). Even as Google moved towards entities and knowledge graphs, the practical work of SEO remained rooted in keywords, links, and on-page optimization.

AI visibility requires something different. The models that generate brand recommendations are retrieving associations built during training, formed from patterns of co-occurrence across many contexts. A brand that publishes 500 blog posts about “zero trust” will not build the same association strength as a brand that appears in NIST documentation, peer discussions, analyst reports, and technical integrations.

This is fantastic news for brands that do good work in their markets. Content volume alone does not create strong relational associations. The model’s training process works as a quality filter: It learns from patterns across the entire corpus, not from any single page. A brand with real expertise, discussed across many contexts by many voices, will build stronger associations than a brand that simply publishes more.

The question to ask is not “Do we have a page about this topic?” It is: “If someone read everything the AI has absorbed about this topic, would our brand come across as a credible participant in the conversation?”

That is a harder question. But the research that began with Petroni’s fill-in-the-blank tests in 2019 has given us enough understanding of the mechanism to measure it. And what you can measure, you can improve.



Featured Image: SvetaZi/Shutterstock

https://www.searchenginejournal.com/relational-knowledge-topical-presence-how-ai-chooses-which-brands-to-recommend/570482/




How To Do Evergreen Content In 2026 (And Beyond)

It’s fair to say the majority of evergreen content will not drive the value it did five years ago. Hell, even one or two years ago. What we have done for the last decade will not be as profitable.

AIOs have eroded clicks. Answer engines have given people options. And to be fair, people are bored of the +2,000-word article answering “What time does X start?” Or recipes where the ingredient list is hidden below 1,500 words about why daddy didn’t like me.

In response to this, publishers say it will be important to focus on more original investigations and less on things like evergreen content (-32 percentage points).

So, you’ve got to be smart. This has to be framed as a commercial decision. Content needs to drive real business value. You’ve got to be confident in it delivering.

That doesn’t mean every article, video, or podcast has to drive a subscription or direct conversion. But it needs to play a clear part in the user’s journey. You need to be able to argue for its inclusion:

  • Is it a jumping-off point?
  • Will it drive a registration?
  • Or a free subscriber, a save, or a follow on social?

More commonly known as micro-conversions, these things really matter when it comes to cultivating and retaining an audience. People don’t want more bland, banal nonsense. They want something better.

The antithesis to AI slop will help your business be profitable.

So, what’s inherently wrong with evergreen content? Nothing. It’s a foundational part of the content pyramid.

In most cases, it’s been done to death, and AI is very effective at summarizing a lot of this bread-and-butter content.

Over the last 10 years, it’s been pretty easy to build a strategy around evergreen content, particularly if you go down the parasite SEO route. Remember Forbes Advisor and the great affiliate cull?

The epitome of quantity over quality; it worked and made a fortune.

But I digress.

An authoritative enough site has been able to drive clicks and follow-up value with sub-par content for decades. That is now slowly diminishing. Rightly or wrongly.

And not because of the Helpful Content stuff. Google nerfed all the small sites long before the goliaths. Now they’ve gone after the big fish.

We have to make commercial decisions that help businesses make the right choice. Concepts like E-E-A-T have had an impact on the quality of content (a good thing). It’s also had an impact on the cost of creating quality content.

  • Working with experts.
  • Unique imagery.
  • Video.
  • Product and development costs.
  • Data.

This isn’t cheap. Once upon a time, we could generate value from authorless content full of stock images and no unique value. Unless you’re willing to bend the rules (which isn’t an option for most of us), you need an updated plan.

Is evergreen content still worth creating? It depends.

You need to establish how much your content now costs to produce and the value it brings. Not everything is going to drive a significant conversion. That doesn’t mean you shouldn’t do it. It means you need to have a very clear reason for what you’re creating and why.

If particular topics are essential to your audience, service, and/or product, then they should at least be investigated.

One of the joys of creating evergreen content has always been that it adds value throughout the year(s). A couple of annual updates, even relatively light touch, could yield big results.

Commissioning something of quality in this space is likely more expensive. It needs to be worth it; it has to form part of your multi-channel experience to make it so.

  • Unique data and visuals that can be shared on socials.
  • Building campaigns around it (or it’s part of a campaign).
  • You can even build authors and your brand around it.
  • And if it resonates, you can rinse and repeat year after year.
Ahrefs created demand for their brand + an evergreen topic – AIOs (Image Credit: Harry Clarkson-Bennett)

And this type of content or campaign can increase demand for a topic. You can become a thought leader by shifting the tide of public opinion.

For publishers and content creators, that is foundational.

Two broadly rhetorical questions:

  1. Do you think that, in a world of zero-click searches, clicks and reach are sensible tier-one goals?
  2. Do you want to be targeted against a metric that is very likely to go down each year?
Like it or not, people really do use AIOs (Image Credit: Harry Clarkson-Bennett)

I don’t – on both counts. We should want to be targeted on driving real value for the business.

Something like:

  1. Tier 1: Value – core, revenue, and value-driving conversions.
  2. Tier 2: Registrations (and things that help you build your owned properties), links, shares, and comments.
  3. Tier 3: Page views, returning visits, and engagement metrics.

Micro-conversions over clicks. We’re focusing on registrations, free or lower-value subscriptions. Whatever gets the user into the ecosystem and one step closer to a genuinely valuable conversion.

The messy middle has changed, and it is largely unattributable (Image Credit: Harry Clarkson-Bennett)

Now, could a click be a micro-conversion? If you know that someone who reads a secondary article (by clicking a follow-up link) is 10x more likely to register, that follow-up click could be a sensible micro-conversion.

This type of conversion may not directly drive your bottom line. But it forces you and your team to focus on behaviors that are more likely to lead to a valuable conversion.

That is the point of a micro-conversion. It changes behaviors.

You can tweak the above tiers to better suit your content offering. Not all content is going to drive direct tier one or even two value. You just need to have a very clear idea of its purpose in the customer journey.

If what you’re creating already exists, you’d better make sure you add something extra. You’ve got to force your way into the conversation, and unless you can offer something unique, you’re (almost certainly) wasting your time IMO.

I’ll break all of these down, but I think (in order of importance):

  1. Writing content for people.
  2. Information gain.
  3. Getting it found.
  4. Creating it at the right time.
  5. Structuring it for bots.

Everyone is obsessed with getting cited or being visible in AI.

I think this is completely the wrong way of framing this new era. Getting cited there, or being visible, is a happy byproduct of building a quality brand with an efficient, joined-up approach to marketing.

The more you understand your audience, the more likely you will be to create high-quality, relevant content that gets cited.

If you know your audience really cares about a topic, that’s step one taken care of. If you know where they spend time and how they’re influenced, that’s step two. And if you know how to cut through the noise, that’s step three.

Really, this is an evolution in SEO and the internet at large.

  • Invest in and create content that will resonate with your audience.
  • Create a cross-channel marketing strategy that will genuinely reach and influence them.
  • Share, share, share. Be impactful. Get out there.
  • Make sure it’s easy to read, share, and consume.

Your content still needs to reach and be remembered by the right people. Do that better than anybody else, and wider visibility will come.

In SEO, we have a different definition of information gain than more traditional information retrieval mechanics. I don’t know if that’s because we’re wrong (probably), or that we have a valid reason…

Maybe someone can enlighten me?

In more traditional machine learning, information gain measures how much uncertainty is reduced after observing new data. That uncertainty is captured by entropy, which is a way of quantifying how unpredictable a variable is based on its probability distribution.

Events with low probability are more surprising and therefore carry more information. High probability events are less surprising and novel. Therefore, entropy reflects the overall level of disorder and unpredictability across all possible outcomes.

Information gain, then, tells us how much that unpredictability drops when we split or segment the data. A higher information gain means the data has become more ordered and less uncertain – in other words, we’ve learned something useful.
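Those two definitions are compact enough to state in a few lines of code (a minimal sketch using Shannon entropy in bits, with invented toy labels):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of outcomes."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, groups):
    """How much uncertainty drops after splitting `labels` into `groups`."""
    n = len(labels)
    remaining = sum(len(g) / n * entropy(g) for g in groups)
    return entropy(labels) - remaining

labels = ["click", "click", "skip", "skip"]            # maximally uncertain: 1 bit
perfect_split = [["click", "click"], ["skip", "skip"]]  # split removes all uncertainty
print(information_gain(labels, perfect_split))          # 1.0
```

A split that leaves the groups as mixed as the original data would score 0.0: nothing was learned.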

To us in SEO, information gain means the addition of new, relevant information. Beyond what is already out there in the wider corpus.

A representative workflow of Google’s Contextual estimation of link information gain patent (Image Credit: Harry Clarkson-Bennett)

Google wants to reduce uncertainty. Reduce ambiguity. Content with a higher level of information gain isn’t only different, it elevates a user’s understanding. It raises the bar by answering the question(s) and topic more effectively than anyone else.

So, try something different, novel even, and watch Google test your content higher up in the SERPs to see if it satisfies a user.

This is such an important concept for evergreen content because so many of these queries have well-established answers. If you’re just parroting these answers because your competitors do it, you’re not forcing Google’s hand.

Particularly if you’re still just copying headers and FAQs from the top three results.

Audiences are not arriving at publisher destinations through direct navigation at the same scale. They encounter journalism incidentally, through social feeds, not through habitual site visits.

Younger audiences spend less time on news sites and more time on social every year (Image Credit: Harry Clarkson-Bennett)

You’ve got to meet them there and force their hand.

According to this patent – contextual estimation of link information gain – Google scores documents based on the additional information they offer to a user, considering what the user has already seen.

“Based on the information gain scores of a set of documents, the documents can be provided to the user in a manner that reflects the likely information gain that can be attained by the user if the user were to view the documents.”

Bots, like people, need structure to properly “understand” content.

Elements like headings (h1 – h6), semantic HTML, and linking effectively between articles help search engines (and other forms of information retrieval) understand what content you deem important.

While even semi-literate humans can “understand” content, bots can’t. They fake it. They use engagement signals, NLP, and the vector space model to map your document against others.

They can only do this effectively if you understand how to structure a page.

  • Frontloading key information.
  • Effectively targeting highly relevant queries.
  • Using structured data formats like lists and tables, where appropriate (these are more cost-effective forms of tokenization).
  • Internal and external links.
  • Increasing contextual knowledge gain with multimedia (yes, Google can interpret them).
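That structural parsing can be approximated with nothing but the Python standard library. The snippet below is a simplified sketch, not any search engine's actual parser, but it shows the heading outline a bot can cheaply extract from semantic HTML:

```python
from html.parser import HTMLParser

class OutlineParser(HTMLParser):
    """Collect the heading outline a crawler might extract from a page."""
    def __init__(self):
        super().__init__()
        self.outline, self._tag = [], None
    def handle_starttag(self, tag, attrs):
        if tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            self._tag = tag
    def handle_endtag(self, tag):
        if tag == self._tag:
            self._tag = None
    def handle_data(self, data):
        if self._tag:  # only keep text that sits inside a heading
            self.outline.append((self._tag, data.strip()))

html = """
<article>
  <h1>Evergreen Content</h1>
  <h2>Why structure matters</h2>
  <p>Bots lean on markup to infer hierarchy.</p>
</article>
"""
parser = OutlineParser()
parser.feed(html)
print(parser.outline)  # [('h1', 'Evergreen Content'), ('h2', 'Why structure matters')]
```

Strip the headings out and that outline disappears; the bot is left guessing at the hierarchy from raw text.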

The more clearly a page communicates its topic, subtopics, and relationships, the more likely it is to be consistently retrieved and reused across search and AI surfaces. This has a compounding effect.

Rank more effectively (great for RAG, obviously) → feature more heavily in versions of the internet → force your way into model training data.

If you need to get development work put through, frame it through the lens of assistive technology. Can people with specific needs fully access your pages?

As up to 20% of people need some kind of digital assistive technology, this becomes a ‘ranking factor’ of sorts.

I won’t go through this in much detail, as I’ve written a really detailed post on it. Basically:

  • Track and pay very close attention to spikes in demand (Google Trends API being a very obvious option here).
  • Make sure you’re adding something of value to the wider corpus.
  • If quality content is already out there and you have nothing extra to add, consider whether it’s worth spending money on (SEO is not free).
Create and update timely evergreen content (Image Credit: Harry Clarkson-Bennett)

While this is primarily for news, you can apply a similar logic to evergreen content if you zoom out and follow macro trends.

Evergreen content still spikes at different times throughout the year. Take Spain as an example. There’s much more limited interest from the UK in going to Spain in the winter months. But January (holiday planning or weekend breaks) and summer (more immediate holidaying with the kids) provide better opportunities to generate traffic.

You’re capturing the spike in demand by updating content at the right time. Particularly if you understand the difference in user needs when this spike in demand happens.

  • In January, get your holiday planning content ready.
  • In the summer, get your family-friendly and last-minute holiday content up and running.
Image Credit: Harry Clarkson-Bennett

Demand for evergreen topics can be cyclical. In this example, you would want to capture the spike(s) with carefully planned updates, so you have up-to-date content when a user is really searching for that product, service, or information.

Well, what matters to your brand and your users? Have you asked them?

By the very nature of new and evolving topics and concepts, not everything “evergreen” has been done.

New topics rise. Old ones fall. Some are cyclical.

My rule(s) of thumb would be to establish:

  • Is the topic foundational to your product and service?
  • Does your current (and potential) audience demand it?
  • Do you have something new to add to the wider corpus of information?

If the answer to those three is a broad variation of yes, it’s almost certainly a good bet. Then, I would consider topic search volume, cross-platform demand, and whether the topic is trending up or down in popularity.

There are some things you should be doing “just for SEO.” Content isn’t one of them. You can yell topical authority until you’re blue in the face. If you’re creating stuff just for SEO – kill it.

IMO, these plays have been dead or dying for some time. The modern-day version of the internet (in particular search) demands disambiguation. It demands accuracy. Verification that you are an expert. Otherwise, you’re competing with those who have a level of legitimacy that you do not.

Social profiles, newsletters, real people sharing stories. You’re competing with people who aren’t polishing turds.

If all you’re thinking about is search volume or clicks, I don’t think it’s worth it.

YouTube and TikTok are flying. The young mind cannot escape big tech’s immeasurable evil.

They’re bored with reading the news, but they really, really like video. They will watch it.

TikTok and YouTube dominate (Image Credit: Harry Clarkson-Bennett)

The good news for you (and me) is that platforms like YouTube are still very viable opportunities to build something brilliant. Memorable even. They’re also far more AI-resilient – even if Google desperately tries to summarize everything with AI.

And this brings me nicely onto rented land. Platforms you don’t own.

We’ve spent years creating assets (your websites) to deliver value in search. Owning all of your assets and prioritizing your site above all else. But that is changing. In many cases, people don’t reach your website until they’ve already made a purchasing decision.

I think Rand has managed this transition better than anybody (Image Credit: Harry Clarkson-Bennett)

So, you have to get your stuff out there. Create large, unique studies. Cut them into snippets and short-form videos. Use your individual platform to boost your profile and the content’s chances of soaring.

This is, IMO, particularly prescient for publishers. You’ve got to get out there. You’ve got to share and reuse your content. To make the most of what you’ve created.

Sweat your assets. Even if senior figures aren’t comfortable with this, you need to make it happen.

People have been espousing how important it is to feature as part of the answer. And that may be true. But you’re going to have to be good at selling your projects in if there’s no clear attribution or value.

It might not have the spikes of news, but evergreen interest still spikes at certain times in the year.

Get people – real people – to share it. To have their spin on it.

Outperform the expected early-stage engagement, and maximize your chance of appearing in platforms like Discover through wider platform engagement.

You have to work harder than before.

I shared an example of this around a year ago, but to revisit it, I now have 11 recommendations from other Substacks.

You can’t do this alone (Image Credit: Harry Clarkson-Bennett)

They have accounted for over 40% of my total subscribers. Admittedly, mainly from Barry, Shelby, and Jessie. But they are, if I may be so bold, superhumans.

And when our main driver of evergreen traffic to the site (Google) has really leaned into the evil that surrounds big tech, we’ve got to be cannier. We have to find ways to get people to share our content.

Even evergreen content.

If we’re being honest, a lot of SEO content has been rubbish. Churned out muck.

People are still churning out muck at an incredible rate. When what you’ve got is crap, more crap isn’t the answer. I think people are turned off. They’re tuning out of things at an alarming rate, especially young people.

It is all about getting the right people into the system. Evergreen content is still foundational here. You just have to make it work harder. Be more interesting. Be shareable.

Hopefully, this makes decisions over what we should and shouldn’t create easier.



Featured Image: str.nk/Shutterstock

https://www.searchenginejournal.com/how-to-do-evergreen-content-in-2026-and-beyond/570903/




Who Owns SEO In The Enterprise? The Accountability Gap That Kills Performance via @sejournal, @billhunt

Enterprise SEO doesn’t fail because teams don’t care, lack expertise, or miss tactics. It fails because ownership is fractured.

In most large organizations, everyone controls a piece of SEO, yet no single group owns the outcome. Visibility, traffic, and discoverability depend on dozens of upstream decisions made across engineering, content, product, UX, legal, and local markets. SEO is measured on the result, but it does not control the system that produces it.

In smaller organizations, this problem is manageable. SEO teams can directly influence content, technical decisions, and site structure. In the enterprise, that control dissolves. Incentives diverge. Workflows fragment. Coordination becomes optional.

SEO success requires alignment, but enterprise structures reward isolation. That mismatch creates what I call the accountability gap – the silent failure mode behind most large-scale SEO underperformance.

SEO Is Measured By The Team That Doesn’t Control It

SEO is the only business function I am aware of that is judged on an outcome it cannot deliver independently. This is especially true in the enterprise, where SEO performance is evaluated using familiar metrics: visibility, traffic, engagement, and, increasingly, AI-driven exposure. The irony is that the SEO function rarely controls the systems that generate those outcomes.

| Function | Controls | SEO Dependency |
| --- | --- | --- |
| Development | Templates, rendering, performance | Crawlability, indexability, structured data |
| Content Teams | Messaging, depth, updates | Relevance, coverage, AI eligibility |
| Product Teams | Taxonomy, categorization, naming | Entity clarity, internal structure |
| UX & Design | Navigation, layout, hierarchy | Discoverability, user engagement |
| Legal & Compliance | Claims, restrictions | Content completeness & trust signals |
| Local Markets | Localization & regional content | Cross-market consistency & intent alignment |

SEO depends on all of these departments to do their job in an SEO-friendly manner for it to have a remote chance of success. This makes SEO unusual among business functions. It is judged by performance, yet it cannot deliver that performance independently. And because SEO typically sits downstream in the organization, it must request changes rather than direct them.

That structural imbalance is not a process issue. It is an ownership problem.

The Accountability Gap Explained

The accountability gap appears whenever a business-critical outcome depends on multiple teams, but no single team is accountable for the result.

SEO is a textbook example: fundamental search success requires development to implement correctly, content to align with demand, product teams to structure information coherently, markets to maintain consistency, and legal to permit eligibility-supporting claims. Failure occurs when even one link breaks.

Inside the enterprise, each of those teams is measured on its own key performance indicators. Development is rewarded for shipping. Content is rewarded for brand alignment. Product is rewarded for features. Legal is rewarded for risk avoidance. Markets are rewarded for local revenue. SEO lives in the cracks between them.

No one is incentivized to fix a problem that primarily benefits another department’s metrics. So issues persist, not because they are invisible, but because resolving them offers no local reward.

KPI Structures Encourage Metric Shielding

This is where enterprise SEO collides head-on with organizational design.

In practice, resistance to SEO rarely looks like resistance. No one says, “We don’t care about search.” Instead, objections arrive wrapped in perfectly reasonable justifications, each grounded in a different team’s success metrics.

Engineering teams explain that template changes would disrupt sprint commitments. Localization teams point to budgets that were never allocated for rewriting content. Product teams note that naming decisions are locked for brand consistency. Legal teams flag risk exposure in expanded explanations. And once something has launched, the implicit assumption is that SEO can address any fallout afterward.

Each of these responses makes sense on its own. None are malicious. But together, they form a pattern where protecting local KPIs takes precedence over shared outcomes.

This is what I refer to as metric shielding: the quiet use of internal performance measures to avoid cross-functional work. It’s not a refusal to help; it’s a rational response to how teams are evaluated. Fixing an SEO issue rarely improves the metric a given department is rewarded for, even if it materially improves enterprise visibility.

Over time, this behavior compounds. Problems persist not because they are unsolvable, but because solving them benefits someone else’s scorecard. SEO becomes the connective tissue between teams, yet no one is incentivized to strengthen it.

This dynamic is part of a broader organizational failure mode I call the KPI trap, where teams optimize for local success while undermining shared results. In enterprise SEO, the consequences surface quickly and visibly. In other parts of the organization, the damage often stays hidden until performance breaks somewhere far downstream.

The Myth: “SEO Is Marketing’s Job”

To simplify ownership, enterprises often default to a convenient fiction: SEO belongs to marketing.

On the surface, that assumption feels logical. SEO is commonly associated with organic traffic, and organic traffic is typically tracked as a marketing KPI. When visibility is measured in visits, conversions, or demand generation, it’s easy to conclude that SEO is simply another marketing lever.

In practice, that logic collapses almost immediately. Marketing may influence messaging and campaigns, but it does not control the systems that determine discoverability. It does not own templates, rendering logic, taxonomy, structured data pipelines, localization standards, release timing, or engineering priorities. Those decisions live elsewhere, often far upstream from where SEO performance is measured.

As a result, marketing ends up owning SEO on the organizational chart, while other teams own SEO in reality. This creates a familiar enterprise paradox. One group is held accountable for outcomes, while other groups control the inputs that shape those outcomes. Accountability without authority is not ownership. It is a guaranteed failure pattern.

The Core Reality

At its core, enterprise SEO failures are rarely tactical. They are structural, driven by accountability without authority across systems SEO does not control.

Search performance is created upstream through platform decisions, information architecture, content governance, and release processes. Yet SEO is almost always measured downstream, after those decisions are already locked. That separation creates the accountability gap.

SEO becomes responsible for outcomes shaped by systems it doesn’t control, priorities it can’t override, and tradeoffs it isn’t empowered to resolve. When success requires multiple departments to change, and no one owns the outcome, performance stalls by design.

Why This Breaks Faster In AI Search

In traditional SEO, the accountability gap usually expressed itself as volatility. Rankings moved. Traffic dipped. Teams debated causes, made adjustments, and over time, many issues could be corrected. Search engines recalculated signals, pages were reindexed, and recovery, while frustrating, was often possible. AI-driven search behaves differently because the evaluation model has changed.

AI systems are not simply ranking pages against each other. They are deciding which sources are eligible to be retrieved, synthesized, and represented at all. That decision depends on whether the system can form a coherent, trustworthy understanding of a brand across structure, entities, relationships, and coverage. Those signals must align across platforms, templates, content, and governance.

This is where the accountability gap becomes fatal. When even one department blocks or weakens those elements – by fragmenting entities, constraining content, breaking templates, or enforcing inconsistent standards – the system doesn’t partially reward the brand. It fails to form a stable representation. And when representation fails, exclusion follows. Visibility doesn’t gradually decline. It disappears.

AI systems default to sources that are structurally coherent and consistently reinforced. Competitors with cleaner governance and clearer ownership become the reference point, even if their content is not objectively better. Once those narratives are established, they persist. AI systems are far less forgiving than traditional rankings, and far slower to revise once an interpretation hardens.

This is why the accountability gap now manifests as a visibility gap. What used to be recoverable through iteration is now lost through omission. And the longer ownership remains fragmented, the harder that loss is to reverse.

A Note On GEO, AIO, And The Labeling Distraction

Much of the current conversation reframes these challenges under new labels: GEO, AIO, AI SEO, generative optimization. The terminology isn’t wrong. It’s just incomplete.

These labels describe where visibility appears, not why it succeeds or fails. Whether the surface is a ranking, an AI Overview, or a synthesized answer, the underlying requirements remain unchanged: structural clarity, entity consistency, governed content, trustworthy signals, and cross-functional execution.

Renaming the outcome does not change the operating model required to achieve it.

Organizations don’t fail in AI search because they picked the wrong acronym. They fail because the same accountability gap persists, with faster and less forgiving consequences.

The Enterprise SEO Ownership Paradox

At its core, enterprise SEO operates under a paradox that most organizations never explicitly confront.

SEO is inherently cross-functional. Its performance depends on systems, processes, platforms, and decisions that span development, content, product, legal, localization, and governance. It behaves like infrastructure, not a channel. And yet, it is still managed as if it were a marketing function, a reporting line, or a service desk that reacts to requests.

That mismatch explains why even well-funded SEO teams struggle. They are held responsible for outcomes created by systems they do not control, processes they cannot enforce, and decisions they are rarely empowered to shape.

This paradox stays abstract until it’s reduced to a single, uncomfortable question:

Who is accountable when SEO success requires coordinated changes across three departments?

In most enterprises, the honest answer is simple. No one.

And when no one owns cross-functional success, initiatives stall by design. SEO becomes everyone’s dependency and no one’s priority. Work continues, meetings multiply, and reports are produced – but the underlying system never changes.

That is not a failure of execution. It is a failure of ownership.

What Real Ownership Looks Like

Organizations that win redefine SEO ownership as an operational capability, not a departmental role.

They establish executive sponsorship for search visibility, shared accountability across development, content, and product, and mandatory requirements embedded into platforms and workflows. Governance replaces persuasion. Standards are enforced before launch, not debated afterward.

SEO shifts from requesting fixes to defining requirements teams must follow. Ownership becomes structural, not symbolic.

The Final Reality

This perspective isn’t theoretical. It’s grounded in my nearly 30 years of direct experience designing, repairing, and operating enterprise website search programs across large organizations, regulated industries, complex platforms, and multi-market deployments.

I’ve sat in escalation meetings where launches were declared successful internally, only for visibility to quietly erode once systems and signals reached the outside world. I’ve watched SEO teams inherit outcomes created months earlier by decisions they were never part of. And more recently, I’ve worked with leadership teams who didn’t realize they had a search problem until AI-driven systems stopped citing them altogether. These are not edge cases. They are repeatable organizational failure modes.

What ultimately separated failure from recovery was never better tactics, better tools, or better acronyms. It was ownership. Specifically, whether the organization recognized search as a shared system-level responsibility and structured itself accordingly.

Enterprise SEO doesn’t break because teams aren’t trying hard enough. It breaks when accountability is assigned without authority, and when no one owns the outcomes that require coordination across the organization.

That is the problem modern search exposes. And ownership is the only durable fix.

Coming Next

The Modern SEO Center Of Excellence: Governance, Not Guidelines

We’ll close the loop by showing how enterprises institutionalize ownership through a Center of Excellence that governs standards, enforcement, entity governance, and cross-market consistency, the missing layer that prevents the accountability gap from recurring.

Featured Image: ImageFlow/Shutterstock

https://www.searchenginejournal.com/who-owns-seo-in-the-enterprise-the-accountability-gap-that-kills-performance/566095/




Google Answers Why Core Updates Can Roll Out In Stages via @sejournal, @martinibuster

Google’s John Mueller responded to a question about whether core updates roll out in stages or follow a fixed sequence. His answer offers some clarity about how core updates are rolled out and also about what some core updates actually are.

Question About Core Update Timing And Volatility

An SEO asked on Bluesky whether core updates behave like a single rollout that is then refined over time or if the different parts being updated are rolled out at different stages.

The question reflects a common observation that rankings tend to shift in waves during a rollout period, often lasting several weeks. This has led to speculation that updates may be deployed incrementally rather than all at once.

They asked:

“Given the timing, I want to ask a core update related question. Usually, we see waves of volatility throughout the 2-3 weeks of a rollout. Broadly, are different parts of core updated at different times? Or is it all reset at the beginning then iterated depending on the results?”

Core Updates Can Require Step-By-Step Deployment

Mueller explained that Google does not formally define or announce stages for core updates. He noted that these updates involve broad changes across multiple systems, which can require a step-by-step rollout rather than a single deployment.

He responded:

“We generally don’t announce “stages” of core updates.. Since these are significant, broad changes to our search algorithms and systems, sometimes they have to work step-by-step, rather than all at one time. (It’s also why they can take a while to be fully live.)”

Updates Depend On Systems And Teams Involved

Mueller next added that there is no single mechanism that governs how all core updates are released. Instead, updates reflect the work of different teams and systems, which can vary from one update to another.

He explained:

“I guess in short there’s not a single “core update machine” that’s clicked on (every update has the same flow), but rather we make the changes based on what the teams have been working on, and those systems & components can change from time to time.”

Core Updates May Roll Out Incrementally Rather Than All At Once

Mueller’s explanation suggests that the waves of volatility observed during core updates may correspond to incremental changes across different systems rather than a single reset followed by adjustments. Because updates are tied to multiple components, the rollout may progress in parts as those systems are updated and brought fully live.

This reflects a process where some changes are complex and require a more nuanced step-by-step rollout, rather than being released all at once, which may explain why ranking shifts can appear uneven during the rollout period.

Connection To Google’s Spam Update?

I don’t think that it was a coincidence that the March Core update followed closely after the recent March 2026 Spam Update. The reason I think that is because it’s logical for spam fighting to be a part of the bundle of changes made in a core algorithm update. That’s why Googlers sometimes say that a core update should surface more relevant content and less of the content that’s low quality.

So when Google announces a Spam Update, that stands out because either Google is making a major change to the infrastructure that Google’s core algorithm runs on or the spam update is meant to weed out specific forms of spam prior to rolling out a core algorithm update, to clear the table, so to speak. And that is what appears to have happened with the recent spam and core algorithm updates.

Comparison With Early Google Updates

Way back in the early days, around 25 years ago, Google used to have an update every month, offering a chance to see whether new pages were indexed and ranked, and how existing pages were doing. The first days of each update saw wide-scale fluctuations, which we (the members of the WebmasterWorld forum) called the Google Dance.

Back then, it felt like updates were just Google adding more pages and re-ranking them. Then, around the 2003 Florida update, it became apparent that the actual ranking systems were being changed and the fluctuations could go on for months. That was probably the first time the SEO community noticed a different kind of update, one closer to a core algorithm update.

In my opinion, one way to think of it is that Google’s indexing and ranking algorithms are like software. And then, there’s also hardware and software that are a part of the infrastructure that the indexing and ranking algorithms run on (like the operating system and hardware of your desktop or laptop).

That’s an oversimplification but it’s useful to me for visualizing what a core algorithm update might be. Most, if not all of it, is related to the indexing and ranking part. But I think sometimes there’s infrastructure-type changes going on that improve the indexing and ranking part.

Featured Image by Shutterstock/A9 STUDIO

https://www.searchenginejournal.com/google-answers-why-core-updates-can-roll-out-in-stages/571003/




The Science Of What AI Actually Rewards via @sejournal, @Kevin_Indig


In “The Science Of How AI Pays Attention,” I analyzed 1.2 million ChatGPT responses to understand exactly how AI reads a page. In “The Science Of How AI Picks Its Sources,” I analyzed 98,000 citation rows to understand which pages make it into the reading pool at all.

This is Part 3.

Where Part 1 told you where on a page AI looks, and Part 2 told you which pages AI routinely considers, this one tells you what AI actually rewards inside the content it reads.

The data clarifies:

  • Most AI SEO writing advice doesn’t hold at scale. There is no universal “write like this to get cited” formula – the signals that lift one industry’s citation rates can actively hurt another.
  • The entity types that predict citation are not the ones being targeted. DATE and NUMBER are universal positives. PRICE suppresses citation in five of six verticals, and KG-verified entities are a negative signal.
  • The one writing signal that holds across all seven verticals: Declarative language in your intro, +14% aggregate lift.
  • Heading structure is binary. Commit to the right number for your vertical or use none. Three to four headings are worse than zero in every vertical.
  • Corporate content dominates. Reddit doesn’t. AI citation behavior does not mirror what happened to organic search in 2023-2024.

1. Specific Writing Signals Influence Citation, While Others Harm It

While “The Science Of How AI Pays Attention” covers parts of the page and types of writing that influence ChatGPT visibility, I wanted to understand which writing-level signals – word count, structure, language style – predict higher AI citation rates across verticals.

Approach

  1. I compared high-cited pages (more than three unique prompt citations) vs. low-cited across seven writing metrics: word count, definitive language, hedging, list items, named entity density, and intro-specific signals.
  2. I analyzed the first 1,000 words for list item count, named entity density, intro definitive language token density, and intro number count.
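The split-and-compare approach above can be sketched in a few lines. This is a toy illustration, not the author's actual pipeline: the field names (`unique_prompt_citations`, `word_count`, `hedging_phrases`) and the sample rows are hypothetical; only the >3-citation threshold comes from the description.

```python
from statistics import mean

# Toy rows standing in for the real dataset; all field names and values
# are hypothetical placeholders.
pages = [
    {"unique_prompt_citations": 5, "word_count": 2400, "hedging_phrases": 1},
    {"unique_prompt_citations": 0, "word_count": 1100, "hedging_phrases": 4},
    {"unique_prompt_citations": 7, "word_count": 3100, "hedging_phrases": 0},
    {"unique_prompt_citations": 2, "word_count": 900,  "hedging_phrases": 3},
]

def metric_lift(rows, metric, threshold=3):
    """Ratio of the metric's mean among high-cited pages (more than
    `threshold` unique prompt citations) to its mean among low-cited
    pages. Values above 1.0 suggest the metric tracks with citation."""
    high = [r[metric] for r in rows if r["unique_prompt_citations"] > threshold]
    low = [r[metric] for r in rows if r["unique_prompt_citations"] <= threshold]
    return mean(high) / mean(low)

print(round(metric_lift(pages, "word_count"), 2))
print(round(metric_lift(pages, "hedging_phrases"), 2))
```

A lift like the 1.59x word-count figure for CRM/SaaS is exactly this kind of ratio, computed per vertical rather than over the whole dataset.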

Results: Across all verticals, definitive phrasing and including relevant entities matter. But most signals are flat.

Image Credit: Kevin Indig

What The Industry Patterns Showed

When splitting the data up by vertical, we suddenly see preferences:

  • Total word count was strongest in CRM/SaaS (1.59x).
  • Finance was an anomaly with word count: Shorter pages win (0.86x word count).
  • Definitive phrases in the first 1,000 characters were positive for most verticals.
  • Education is a signal void. Writing style explains almost nothing about citation likelihood there.
Image Credit: Kevin Indig

Top Takeaways

1. There is no universal “write like this to get cited” formula. For example, the signals that lift CRM/SaaS citation rates actively hurt Finance. Instead, match content format to vertical norms.

2. The one universal rule: open with a direct declarative statement. Not a question, not context-setting, not preamble. The form is “[X] is [Y]” or “[X] does [Z].” This is the only writing instruction that holds regardless of vertical, content type, or length.

3. LLMs “penalize” hedging in your intro. “This may help teams understand” performs worse than “Teams that do X see Y.” Remove qualifiers from your opening paragraph before any other optimization.

2. The Entity Types That Predict Citation Are Not The Ones Being Targeted

Most AEO advice focuses on named entities as a category: Pack in more known brand names, tool names, numbers. The cross-vertical entity type analysis below tells a more specific (and more useful) story.

Approach

  1. Ran Google’s Natural Language API on the first 1,000 characters (about 200-250 words) of each unique URL.
  2. Computed lift per entity type: % of high-cited pages with that type / % of low-cited pages.
  3. Analyzed 5,000 pages across seven verticals.

* A quick note on terminology: Google NLP classifies software products, apps, and SaaS tools as CONSUMER_GOOD, a legacy label from when the API was built for physical retail. Throughout this analysis, CONSUMER_GOOD means software/product entities.
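The lift formula in step 2 is a presence ratio, which can be sketched as below. The sample data is hypothetical; in the real analysis, each page's `entity_types` set would come from the Natural Language API response for its first 1,000 characters.

```python
# Each row: whether the page is high-cited, plus the entity types the NLP
# API returned for its intro. All sample values are hypothetical.
pages = [
    {"high_cited": True,  "entity_types": {"DATE", "NUMBER", "ORGANIZATION"}},
    {"high_cited": True,  "entity_types": {"NUMBER", "CONSUMER_GOOD"}},
    {"high_cited": False, "entity_types": {"PRICE", "ORGANIZATION"}},
    {"high_cited": False, "entity_types": {"NUMBER", "PRICE"}},
]

def entity_lift(rows, entity_type):
    """% of high-cited pages containing the type divided by the % of
    low-cited pages containing it. Above 1.0 = positive signal."""
    high = [r for r in rows if r["high_cited"]]
    low = [r for r in rows if not r["high_cited"]]
    share_high = sum(entity_type in r["entity_types"] for r in high) / len(high)
    share_low = sum(entity_type in r["entity_types"] for r in low) / len(low)
    return share_high / share_low if share_low else float("inf")

print(entity_lift(pages, "NUMBER"))  # present in 2/2 high vs 1/2 low -> 2.0
print(entity_lift(pages, "PRICE"))   # present in 0/2 high vs 2/2 low -> 0.0
```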

Results: DATE and NUMBER are the most universal positive signals. Interestingly, PRICE is the strongest universal negative.

Image Credit: Kevin Indig
Image Credit: Kevin Indig

What The Industry Patterns Showed

  • DATE is the most universal positive signal, with the exception of Finance (0.65x).
  • NUMBER is the second most universal. Specific counts, metrics, and statistics in the intro consistently predict higher citation rates. Finance (0.98x) and Product Analytics (1.10x) mark the floor and ceiling of that range.
  • PRICE is the strongest universal negative. Pages that open with pricing signal commercial intent. Finance is the sole exception at 1.16x, likely because price here means fee percentages and rate comparisons, which are the actual reference data financial queries are looking for.
  • CONSUMER_GOOD (software/product entities) is mixed. In Healthcare, product entities signal established brands and tools. In Crypto, naming specific protocols and products is core to answering technical queries.
  • PHONE_NUMBER is a positive signal in Healthcare (1.41x) and Education (1.40x). In both cases, it is almost certainly a proxy for established brands/institutions/providers with real physical presence, not a literal signal to add phone numbers to your pages.

The Knowledge Graph inversion deserves its own note here:

  • The data showed that high-cited pages average 1.42 KG-verified entities vs. 1.75 for low-cited pages (lift: 0.81x).
  • Pages built around well-known, KG-verified entities (major brands, institutions, famous people) tend toward generic coverage, which isn’t preferred by ChatGPT.
  • High-cited pages are dense with specific, niche entities: a particular methodology, a precise statistic, a named comparison. Many of those niche entities have no KG entries at all. That specificity is what AI reaches for.

Top Takeaways

1. Add the publish date to your pages and aim to use at least one specific number in your content. That combination is the closest thing to a universal AI citation signal this dataset produced. But Finance gets there through price data and location specificity instead.

2. Avoid opening with pricing in non-finance verticals. Price-dominant intros correlate with lower citation rates.

3. KG presence and brand authority do not translate to an AI citation advantage. Chasing Wikipedia entries, brand panels, or KG verification is the wrong lever. Specific, niche entities (even ones without KG entries) outperform famous ones.

3. Heading Structure: Commit To One Or Don’t Bother

We know headings matter for citations from the previous two analyses. Next, I wanted to understand whether heading count predicts citation rates and whether the optimal structure varies by vertical.

Approach

  1. Counted total headings per page (H1+H2+H3) across all cited URLs.
  2. Grouped pages into 7 heading-count buckets: 0, 1-2, 3-4, 5-9, 10-19, 20-49, 50+.
  3. Computed high-cited rate (% of URLs that are high-cited) per bucket per vertical.
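The bucketing in steps 2 and 3 can be sketched as follows. The bucket boundaries match the study; the sample pages and the simple tuple representation are placeholders for illustration.

```python
def heading_bucket(count):
    """Map a page's total heading count (H1+H2+H3) to the study's buckets."""
    for label, lo, hi in [("0", 0, 0), ("1-2", 1, 2), ("3-4", 3, 4),
                          ("5-9", 5, 9), ("10-19", 10, 19), ("20-49", 20, 49)]:
        if lo <= count <= hi:
            return label
    return "50+"

def high_cited_rate_by_bucket(pages):
    """pages: list of (heading_count, is_high_cited) tuples.
    Returns the share of high-cited URLs per heading bucket."""
    totals, highs = {}, {}
    for count, is_high in pages:
        b = heading_bucket(count)
        totals[b] = totals.get(b, 0) + 1
        highs[b] = highs.get(b, 0) + (1 if is_high else 0)
    return {b: highs[b] / totals[b] for b in totals}

# Hypothetical sample: (heading_count, high_cited)
sample = [(0, True), (0, False), (3, False), (4, False), (12, True), (12, True)]
print(high_cited_rate_by_bucket(sample))
```

Comparing these per-bucket rates across verticals is what surfaces patterns like the 3-4 heading dead zone.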

Results: Including more headings in your content is not universally better. The sweet spot depends on vertical and content type. One finding holds everywhere: Strangely, 3-4 headings are worse than zero.

Image Credit: Kevin Indig

What The Industry Patterns Showed

  • CRM/SaaS is the only vertical where the 20+ heading lift is confirmed: 12.7% high-cited rate at 20-49 headings vs. a 5.9% baseline. The 50+ bucket reaches 18.2%. Long structured reference pages and comparison guides with one section per tool outperform everything else here.
  • Healthcare inverts most sharply. The high-cited rate drops from 15.1% at zero headings to 2.5% at 20-49 headings. A page with 30 H2s on telehealth topics signals optimization intent, not clinical authority.
  • Finance peaks at 10-19 headings (29.4% high-cited rate). Structured but not exhaustive: think rate tables, regulatory breakdowns, and advisor comparison pages with moderate heading depth.
  • Crypto peaks at five to nine headings (34.7% high-cited rate). Technical documentation in this vertical tends toward dense prose with moderate navigation structure. Over-structuring breaks up the technical depth.
  • Education is flat across all heading counts, which is consistent with the writing signals finding. Heading structure explains almost nothing about citation likelihood in education content.
  • The three to four heading dead zone holds across every vertical without exception. Partial structure confuses AI navigation without providing the full benefit of a committed hierarchy.

Top Takeaways

1. The 20+ heading finding from Part 1 is a CRM/SaaS finding, not a universal one. Applying it to healthcare, education, or finance could actively suppress citation rates in those verticals.

2. The principle that holds everywhere: Commit to structure or don’t use it. The middle ground costs you in every vertical. A fully-structured page with the right heading depth outperforms a half-structured page in every vertical.

3. Use the optimal heading range for your vertical. Crypto: 5-9. Finance and Education: 10-19. CRM/SaaS: 20+ (with H3s). Healthcare: 0 or 5-9 at most. Long CRM reference pages with 50+ sections are the one case where maximum heading depth pays off.

4. UGC Doesn’t Dominate

The “Reddit effect” reshaped organic search between 2024 and 2025. I wanted to understand whether ChatGPT cites user-generated content (Reddit, forums, reviews) at meaningful rates or whether corporate/editorial content dominates.

The common industry assumption – that AI also preferentially cites community voices – is not what we found in the data.

Approach

  1. Classified these cited URLs as (1) UGC: Reddit, Quora, Stack Overflow, forum subdomains, Medium, Substack, Product Hunt, Tumblr, or (2) community/forum prefixes or corporate/editorial by domain.
  2. Computed citation share per category per vertical.
  3. Dataset: 98,217 citations across 7 verticals.
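The classification in step 1 boils down to domain and path matching. A minimal sketch, with abbreviated lists (the study's actual rules also cover forum subdomains and more platforms than shown here):

```python
from urllib.parse import urlparse

# Abbreviated, illustrative lists; not the study's full rule set.
UGC_DOMAINS = {"reddit.com", "quora.com", "stackoverflow.com",
               "medium.com", "substack.com", "producthunt.com", "tumblr.com"}
COMMUNITY_PREFIXES = ("/community/", "/forum/", "/forums/")

def classify(url):
    """Label a cited URL as 'ugc' or 'corporate' by domain and path."""
    parsed = urlparse(url)
    host = parsed.netloc.lower().removeprefix("www.")
    # Match the registered domain or any subdomain of a known UGC site.
    if host in UGC_DOMAINS or any(host.endswith("." + d) for d in UGC_DOMAINS):
        return "ugc"
    # Community/forum sections hosted on otherwise corporate domains.
    if parsed.path.startswith(COMMUNITY_PREFIXES):
        return "ugc"
    return "corporate"

print(classify("https://www.reddit.com/r/seo/comments/abc"))  # ugc
print(classify("https://example.com/pricing"))                # corporate
```

Citation share per category is then just a count of each label divided by total citations, computed per vertical.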

Results: Corporate content accounts for 94.7% of all citations. UGC is nearly invisible.

Image Credit: Kevin Indig

What The Industry Patterns Showed

  • Finance is the most corporate-locked vertical at 0.5% UGC. YMYL (Your Money, Your Life) content appears to systematically suppress citations to community opinion.
  • Healthcare sits at 1.8% UGC for the same structural reason. Clinical, telehealth, and HIPAA content draws almost exclusively from institutional sources.
  • Crypto has the highest UGC penetration in the dataset at 9.2%. Community-generated content (Reddit technical threads, Medium tutorials, developer forum posts) answers a meaningful proportion of analyzed queries. In a fast-moving technical niche where official documentation consistently lags, community posts fill the gap.
  • Product Analytics and HR Tech sit at 6.9% and 5.8% UGC. Both are verticals where Reddit comparison threads and product review communities provide genuine signal alongside corporate content.

Top Takeaways

1. The “Reddit effect” in SEO has not translated proportionally to AI citations. In most verticals, reddit.com captures 2-5% of total citations. This finding is in line with other industry research, including this report from Profound.

2. For finance and healthcare: UGC has near-zero AI citation value. Invest in structured, authoritative corporate content with clear sourcing. Community engagement may matter for other reasons, but it does not contribute meaningfully to AI citation share in these verticals.

3. For crypto, product analytics, and HR tech: Community presence has measurable citation value. Detailed Reddit comparison threads, technical Medium posts, and structured developer forum answers can supplement corporate content reach.

What This Means For How You Strategize For LLM Visibility

Across all three parts of this study, the consistent finding is that AI citation is not primarily a writing quality problem.

Part 2 showed it is a content architecture problem: Thin single-intent pages are structurally locked out regardless of how well they’re written. This piece shows the same logic applies inside the content itself.

The aggregate writing signals table is the most important chart in this analysis. Not because it shows you what to do, but because it shows how much of what the AI SEO/GEO/AEO industry is telling you doesn’t survive cross-vertical scrutiny. Word count, list density, named entity counts … all flat or negative at the aggregate. The signals that work are vertical-specific and smaller than our industry’s consensus implies.

The meta-lesson from this analysis is that findings are vertical (and probably topic) specific, which is no different in SEO.

This part concludes the Science of AI – for now. Because the AI ecosystem is constantly changing.

Methodology

We analyzed ~98,000 ChatGPT citation rows pulled from approximately 1.2 million ChatGPT responses from Gauge.

Because AI behaves differently depending on the topic, we isolated the data across seven distinct, verified verticals to ensure the findings weren’t skewed by one specific industry.

Analyzed verticals:

  • B2B SaaS
  • Finance
  • Healthcare
  • Education
  • Crypto
  • HR Tech
  • Product Analytics

Featured Image: CoreDESIGN/Shutterstock; Paulo Bobita/Search Engine Journal

https://www.searchenginejournal.com/the-science-of-what-ai-actually-rewards/570849/




So Your Traffic Tanked: What Smart CMOs Do Next

We’ve all seen it. Brands with healthy websites and excellent content have been watching their organic traffic from Google’s SERP erode for years. In a recent webinar hosted by Search Engine Journal, guest speaker Nikhil Lai, principal analyst of Performance Marketing for Forrester Research, estimated his clients are losing between 10% and 40% of organic and direct traffic year-over-year.

However, a stunning bright spot is this: Lai said referral traffic from answer engines is growing 40% month over month. Visitors arriving from those engines convert at two to four times the rate of traditional search visitors, spend three times as long on site, and arrive with queries averaging 23 words, compared to the three or four words that defined the last decade of search.

Lai asserted that the channel driving this shift deserves a seat at the CMO’s table. Answer engines influence brand perception before purchase intent forms, which makes answer engine optimization (AEO) a brand investment, and puts budget and measurement decisions at the CMO level.

Here is the strategic roadmap Lai laid out at SEJ Live. He highlighted the decisions, org structures, and measurement frameworks that will move AEO from a search team initiative to a C-suite priority.

Answer Engines Build Demand Before Buyers Know What They Want

Classic search captures intent that already exists. A user types “running shoes,” clicks a result, and evaluates options. Answer engines operate earlier and differently: users hold extended conversations with large datasets, rarely click through, and leave those sessions with specific brand associations formed across multiple follow-up questions.

A user who once searched “running shoes” now asks ChatGPT, “What’s the best shoe for overpronation with wide feet in cold weather on pavement?” They exit that conversation with a brand name in mind and search for it directly. Your brand appeared in an AI conversation before the user ever reached your site. Demand is being generated every day inside these research sessions.

The Forrester data Lai presented reinforces the quality of that exposure: Sessions on answer engines average 23 minutes, with users asking five to eight follow-up questions per session. Each turn is another brand impression. The click-through rate stays low; the conversion rate on the traffic that does arrive runs two to four times higher than search-sourced traffic, with stronger average order value and lifetime value.

Brand familiarity is built in answer engines before purchase intent crystallizes in the user’s mind.

SEO Is The Foundation Of AEO

The brands pulling back on SEO investment in response to AEO are making a costly mistake. Lai put it directly: 85 to 90% of current SEO best practices remain fully valid for answer engine visibility.

Google’s E-E-A-T framework (experience, expertise, authoritativeness, trustworthiness) still governs how quality is evaluated across every index. Site architecture, mobile load speed, structured data, and indexation hygiene all strengthen performance across every engine. Every alternative index (Bing’s, Brave’s) is benchmarked against Google’s for completeness. Every bot (GPTBot, Claudebot, Perplexitybot) is benchmarked against Googlebot for sophistication.

SEO is the infrastructure on which AEO runs. The shift is an expansion of scope and emphasis, but AEO is not a replacement of SEO fundamentals.

What changes is where additional effort goes: natural-language FAQ optimization, off-site authority building, pre-rendering for less sophisticated bots, and a measurement framework built around share of voice rather than click volume.

Bing Is Now Your Distribution Network For Every Non-Google Engine

Most answer engines outside Google draw primarily from Bing’s index.

Bing evaluates credibility by weighting what others say about your brand more heavily than what your own site claims. This explains why Reddit threads, Quora answers, Wikipedia entries, G2 reviews, YouTube videos, and Trustpilot pages dominate AI-generated answers. The off-site web has become the primary source of record for how AI describes your brand.

The immediate tactical implication: Push every sitemap update directly to Bing via the IndexNow protocol. This triggers Bingbot to crawl fresh content and feeds that content into Perplexity, ChatGPT, and the broader answer engine ecosystem faster than waiting for organic discovery.
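Per the IndexNow protocol, a batch submission is a JSON POST to a participating endpoint. A minimal sketch, assuming the shared api.indexnow.org endpoint and a verification key file you have already placed at your site root; the host, key, and URLs below are placeholders:

```python
import json
from urllib import request

def build_indexnow_payload(host, key, urls):
    """Assemble the IndexNow JSON body for a batch URL submission."""
    return {
        "host": host,
        "key": key,
        # Assumes the key file lives at the site root as <key>.txt.
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": list(urls),
    }

def submit(payload, endpoint="https://api.indexnow.org/indexnow"):
    """POST the payload; participating engines (including Bing) share it."""
    req = request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    return request.urlopen(req)  # a 200/202 response indicates acceptance

payload = build_indexnow_payload(
    "example.com", "your-indexnow-key",
    ["https://example.com/new-post", "https://example.com/updated-page"],
)
# submit(payload)  # uncomment to actually send
```

In practice, this call would be wired into your publish/update workflow so every sitemap change is pushed rather than waiting for organic discovery.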

Bing’s index remains the fastest route to non-Google answer engine visibility. Perplexity is building its own index (Sonar), and OpenAI has signaled plans to build or acquire one, but Bing is the distribution network that matters today.

AEO Requires Cross-Functional Ownership

AEO arguably spans more functions than SEO. It shares three with SEO (content, web development, and paid search) and interfaces more strongly with PR, brand marketing, and social media.

PR earns a seat because off-site authority outweighs on-site signals in AEO. Brand mentions in publications, influencer mentions, and third-party reviews all directly shape how answer engines describe your brand.

Social belongs in the room because Reddit threads and Facebook group discussions show up in AI-generated answers. Community management and reputation management, previously handled separately from SEO, are now integral to AEO. When your social listening data reaches content teams before they draft, the content responds to the questions buyers are actually asking. When it doesn’t, you’re optimizing for questions nobody asked.

Lai proposed two organizational models that work to capture the opportunities inherent in AEO:

  1. Center of Excellence: A senior SEO specialist evolves into an AEO evangelist, runs a COE, and publishes cross-functional standards: clear rules like “every piece of content must answer these five questions” or “every page must include author schema.”
  2. AI Orchestrator: A dedicated hire who builds agents to handle repeatable AEO tasks (schema implementation, JavaScript reduction, FAQ content creation) and governs the cross-functional workflow with published guidelines for all stakeholders.
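The "every page must include author schema" standard from the Center of Excellence model is typically expressed as schema.org JSON-LD. A minimal sketch, with placeholder names and URLs; the generated object would be embedded in a `<script type="application/ld+json">` tag in the page head:

```python
import json

def author_schema(article_title, author_name, author_url):
    """Build a schema.org Article node with an embedded author (JSON-LD)."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": article_title,
        "author": {
            "@type": "Person",
            "name": author_name,
            "url": author_url,  # link to an author bio page
        },
    }

# Placeholder values for illustration only.
snippet = json.dumps(
    author_schema("Example Post", "Jane Doe", "https://example.com/authors/jane"),
    indent=2,
)
```

A COE can publish a template like this once and let every content team stamp it onto new pages, which is exactly the kind of repeatable standard the model is meant to enforce.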

The CMO’s decision is which model fits the organization’s scale, and whether to build it internally or partner with an agency that has already built the infrastructure.

The Content Strategy That Wins In AI Responses

Long-form skyscraper content is an ancient relic. Answer engines reward precise, specific answers to real questions, delivered succinctly and across multiple formats. Lai framed this as Forrester’s question-to-content framework: Every piece of content maps directly to a FAQ being asked on answer engines, including the follow-up questions that emerge within a single session.

Five content moves that produce results:

  1. Build surround-sound FAQ coverage. Create glossaries, FAQ pages, videos, and blog posts that address the same topic cluster from different angles. When Claudebot crawls 38,000 pages for every referred page visit (per Cloudflare data), each page it indexes is an opportunity to signal topical authority. Volume and variety matter.
  2. Publish direct competitor comparisons. Users ask answer engines to compare brands. Brands that create honest, data-backed comparison guides are gaining prominent visibility because they directly answer the queries that pit a brand against its competitors. This was once a taboo content format; it has become a competitive requirement.
  3. Treat off-site syndication as the new backlinking. Hosting AMAs on Reddit, answering questions on Quora, and contributing to industry publications that rank in AI responses all earn the off-site authority that answer engines weigh most heavily. Give third-party voices data and perspective they couldn’t generate themselves, and they will produce mentions that shape how AI describes your brand.
  4. Pre-render pages for bot access. The bots crawling your site lack the compute budget to render JavaScript-heavy pages. Claudebot’s 38,000:1 crawl-to-referral ratio compared to Googlebot’s 5:1 ratio reflects this sophistication gap. Pre-rendering a JavaScript-free version for bots while serving the full experience to human visitors ensures your content gets indexed across every engine. Over time, reduce the amount of JavaScript on the site and put content directly in HTML so bots can parse and index it more often. The more you’re crawled and indexed, the more visible you become.
  5. Create unique content. Lai said, “Being distinctive, differentiated, and unique will help your brand stand out in a sea of sameness. Implicit in all this is that you need a lot more content, greater content velocity and diversity, which means you can use AI to create content. Google won’t automatically penalize AI-created content unless it lacks the watermarks of human authorship. The syntax and diction have to be natural. Use AI to create content, but don’t make it seem AI-generated. Get down into the details. It’s not enough to say your product is great. Explain why in different temperatures, conditions, the thickness, and so on, to satisfy long-tail intent.”
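The pre-rendering move in point 4 usually comes down to user-agent routing: serve a static HTML snapshot to known AI crawlers and the full JavaScript app to everyone else. A minimal sketch; the bot token list here is an assumption and would need to track the crawlers actually appearing in your logs:

```python
# Assumed list of answer-engine crawler tokens; verify against your own access logs.
AI_BOT_TOKENS = ("claudebot", "gptbot", "perplexitybot", "bingbot")

def variant_for(user_agent: str) -> str:
    """Decide which page variant to serve for a given User-Agent header."""
    ua = user_agent.lower()
    if any(token in ua for token in AI_BOT_TOKENS):
        return "prerendered"  # static HTML snapshot, no client-side JS required
    return "full"             # normal JavaScript-rendered experience for humans
```

In practice this check lives at the CDN or reverse-proxy layer, with the snapshots regenerated whenever the underlying content changes so bots and humans see the same information.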

Replace Legacy KPIs With Metrics That Predict Market Share

Lai said the internal conversation he hears most from Forrester clients is: “The hardest part of this transition from SEO to AEO has been trying to convince management to not focus as much on CTR and traffic. Those were indicators of organic authority. They are no longer reliable indicators.

“The new KPIs to focus on are visibility and share of voice. Share of voice can be measured in many ways. The most common are citation share: how often is my brand cited, how often is my content linked, of the opportunities I have to be cited; and mention share: how often is my brand mentioned of the opportunities I have to be mentioned. I’m also seeing more clients look into citation attempts: how often is ChatGPT trying to cite my content, and are there things I can do on the back end of my site to make that citation attempt score go up? Those are the new indicators of authority,” said Lai.
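Both shares Lai describes reduce to the same ratio: appearances divided by opportunities to appear. A sketch with made-up numbers:

```python
def share(hits: int, opportunities: int) -> float:
    """Share of voice: fraction of opportunities where the brand actually appeared."""
    return hits / opportunities if opportunities else 0.0

# Citation share: answers that cited or linked your content,
# out of all answers where you could have been cited.
citation_share = share(hits=18, opportunities=120)   # 0.15

# Mention share: answers that named your brand,
# out of all answers where it could have been named.
mention_share = share(hits=42, opportunities=120)    # 0.35
```

The denominators come from whatever prompt set your measurement tool runs against the answer engines, which is why the trend over time matters more than any single absolute value.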

These metrics connect directly to branded search volume, which Lai called “the single strongest leading indicator of market share growth.” The chain of logic to present to the board: higher citation and mention share drives more branded searches, which converts at higher rates, which compounds into measurable market share gains against competitors.

Lai said he expects Google to add citation metrics to Search Console once AI Max adoption reaches critical mass, and he expects an OpenAI Analytics product before year-end.

For now, Lai suggested, the best course of action is to establish a baseline with your current SEO platform and track the directional trend. He contended that, despite concerns about how accurately today’s popular SEO tools measure answer engine mentions, even imperfect measurement reveals which content clusters are earning citations and which need rebuilding.

The Agentic Phase Starts The Clock On B2B Urgency

Answer engines are moving from conversation to action. The current phase, characterized by extended back-and-forth with large datasets, is the warm-up. The agentic phase is defined by engines’ booking, filing, researching, and purchasing on users’ behalf. This will mean fewer clicks, longer sessions, and richer intent signals available to advertisers.

For B2B CMOs, the urgency is immediate. Forrester research shows GenAI has already become the number one source of information for business buyers evaluating purchases of $1 million or more, coming in ahead of customer references, vendor websites, and social media. Your largest deals are being influenced by AI conversations before your sales team enters the picture.

AEO visibility in B2B is a current-pipeline variable that requires immediate attention.

The brands building complete search strategies now, covering answer engines, on-site conversational search, and structured data across every indexed channel, will own discovery and have greater control over brand perception in the next phase of buying behavior.

The window to gain an early-mover competitive advantage is shrinking; soon, AEO visibility will be just another standard expectation everyone has to meet.

Key Takeaways For CMOs

  • Reframe the traffic story. Lower overall traffic volume paired with two-to-four-times higher conversion rates is a net performance gain. Build that case proactively before your CEO draws the wrong conclusion from a falling traffic chart.
  • Fund AEO as an upper-funnel brand channel. That means applying the same budget logic, measurement frameworks, and executive ownership you would bring to any major brand awareness investment, where success is measured in visibility, perception, and long-term share of voice rather than clicks and conversions.
  • Move to share-of-voice KPIs. Citation share and mention share drive branded search volume, which drives market share. Make that causal chain visible to your leadership team.
  • Assign cross-functional ownership with clear governance. Choose between a center of excellence or an AI orchestrator model and make that structural decision this quarter.
  • Prioritize off-site authority as a content strategy responsibility. Reddit, Quora, third-party publications, and YouTube shape AI’s perception of your brand. PR and social teams own the channels that matter most for AEO.
  • Push every sitemap update to Bing via IndexNow. Bing’s index feeds most non-Google answer engines. This is a 15-minute technical change with compounding distribution benefits.
  • Use AI to help with content, but always apply human editing for authority. Content that reads as machine-generated loses trust across every engine, including Google.

What Does A Smart CMO Do Next?

Start with a 90-day experiment using some or all of these strategies.

Audit your current citation and mention share in one category using your existing SEO platform. Identify three high-intent FAQ clusters where your brand should be visible and build surround-sound content for each: a dedicated FAQ page, a comparison guide, and one off-site piece in a publication that appears in AI responses. Push fresh sitemaps to Bing. Track citation share and branded search volume at 30, 60, and 90 days.

The data may make the investment case for a broader rollout. If not, tweak your approach. The brands moving first will capture the highest-quality traffic at the lowest incremental cost and set a citation lead that becomes progressively harder for competitors to close.

The full webinar is available on demand.

Featured Image: Dmitry Demidovich/Shutterstock

https://www.searchenginejournal.com/so-your-traffic-tanked-what-smart-cmos-do-next/570708/