Answer Engine Optimization: How To Get Your Content Into AI Responses via @sejournal, @slobodanmanic

This is Part 2 in a five-part series on optimizing websites for the agentic web. Part 1 covered the evolution from SEO to AAIO and why the shift matters. This article gets practical: how AI systems actually select content, and what you can do about it.

AI Doesn’t Rank Pages. It Selects Fragments.

Traditional search ranks whole pages. AI search does something fundamentally different.

Microsoft’s Krishna Madhavan, principal product manager on the Bing team, described the shift in October 2025: AI assistants “break content down, a process called parsing, into smaller, structured pieces that can be evaluated for authority and relevance. Those pieces are then assembled into answers, often drawing from multiple sources to create a single, coherent response.”

This is the core insight. AI doesn’t pick the best page and show it. It picks the best fragments from many pages and weaves them together. Your page might rank No. 1 on Google and still not get cited in an AI response if its content isn’t structured in fragments that AI can extract.

The numbers show the shift is real. According to the Conductor AEO/GEO Benchmarks Report (January 2026; 13,770 domains, 17 million AI responses), AI traffic now accounts for 1.08% of all website sessions, growing roughly 1% month over month. Microsoft reported that AI referrals to top websites spiked 357% year-over-year in June 2025, reaching 1.13 billion visits. Small numbers today, compounding fast.

One in four Google searches now triggers an AI Overview. In healthcare, it’s nearly one in two. The surface area is growing, and the content that fills these answers has to come from somewhere. The question is whether it comes from you.

The Research: What Actually Gets Cited

The academic research on what makes content citable in AI responses has matured rapidly. The foundational paper, “GEO: Generative Engine Optimization” (Princeton, IIT Delhi, Georgia Tech, published at KDD 2024), tested nine optimization strategies and found that GEO techniques could boost visibility by up to 40% in AI responses. The most effective single technique was citing credible sources, which produced a 115.1% visibility increase for websites that weren’t already ranking in the top positions.

A counterintuitive finding: Writing in an authoritative or persuasive tone did not improve AI visibility. AI systems don’t respond to rhetorical style. They respond to verifiable information.

Since then, 2025 brought a wave of follow-up research that tested these ideas on real production AI engines rather than simulated ones.

The University of Toronto study (September 2025) was the first large-scale analysis across ChatGPT, Perplexity, Gemini, and Claude. Their most striking finding: AI search overwhelmingly favors earned media. In consumer electronics, AI cited third-party authoritative sources 92.1% of the time, compared to Google’s 54.1%. Automotive showed a similar pattern at 81.9% versus 45.1%. In other words, it’s not just how you write content, but whose domain it appears on. Press coverage, product reviews on independent websites, and mentions on industry publications carry far more weight in AI responses than your own website.

Carnegie Mellon’s AutoGEO study (October 2025) used automated methods to discover what generative engines actually prefer. The results showed up to 50.99% improvement over the best baseline, with universal preferences emerging across engines: comprehensive topic coverage, factual accuracy with citations, clear logical structure with headings and lists, and direct answers to queries.

The GEO-16 framework (September 2025) analyzed 1,702 real citations from Brave, Google AI Overviews, and Perplexity. It identified 16 on-page quality factors that predict citation likelihood. The top three: metadata and freshness, semantic HTML, and structured data. Technical on-page factors matter as much as the quality of the writing itself.

And a reality check from Columbia and MIT’s ecommerce study (November 2025): of 15 common content rewriting heuristics, 10 produced negligible or negative results. The optimization strategies that did work converged toward truthfulness, user intent alignment, and competitive differentiation. Not tricks. Substance.

The overall pattern across all of this research: AI systems reward clarity, factual accuracy, and structure. They don’t reward marketing language, persuasion tactics, or keyword density.

Content Structure That Earns Citations

Based on the research and official guidance from Microsoft and Google, here’s what structurally makes content citable.

Heading hierarchy matters more than ever. Use descriptive H2 and H3 headings that each cover one specific idea. Microsoft lists strong headings as “signals that help AI know where a complete idea starts and ends.” Vague headings like “Learn More” or “Overview” give AI nothing to work with. A heading like “How AI parses content differently than search engines” tells the system exactly what the section contains.

Q&A format is native to AI. Write questions as headings with direct answers below them. Microsoft notes that “assistants can often lift these pairs word for word into AI-generated responses.” If your content answers the question someone asks an AI, and it’s structured as a clear question-and-answer pair, you’ve made the AI’s job easy.

Make content snippable. Bulleted and numbered lists, comparison tables, step-by-step instructions. These formats give AI clean, extractable fragments. A paragraph buried in a wall of text is harder for AI to isolate than the same information presented as a three-item list.

Front-load the answer. Start sections with the key information, then provide context. If someone asks, “What temperature should I bake bread at?” and your content opens with a two-paragraph history of bread making before mentioning 375°F, you’ll lose the citation to a competitor who leads with the answer.

Keep sections self-contained. Each section should make sense on its own, without requiring the reader to have read the previous section. AI extracts fragments. If your fragment only makes sense in the context of the whole page, it won’t be selected.

An important technical note from Microsoft: “Don’t hide important answers in tabs or expandable menus: AI systems may not render hidden content, so key details can be skipped.” FAQ answers collapsed inside an expandable menu, product specs hidden behind tabs, content that requires interaction to reveal: it may all be invisible to AI. If information is important, it needs to be in the visible HTML.
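To make this concrete, here is a minimal, hypothetical HTML sketch (the question, answer text, and element IDs are invented for illustration) contrasting an answer in visible markup with one that only exists after user interaction:

<!-- Visible: a self-contained Q&A section with a descriptive heading and a front-loaded answer -->
<section>
  <h2>What temperature should I bake bread at?</h2>
  <p>Most standard loaves bake at 375°F (190°C) for 30 to 35 minutes.</p>
  <p>Enriched doughs with butter or eggs often bake at lower temperatures for longer.</p>
</section>

<!-- Anti-pattern: the answer is injected by JavaScript only when a tab is clicked,
     so it never appears in the HTML a crawler fetches -->
<button id="faq-tab">Baking temperature</button>
<div id="faq-answer"></div>
<script>
  document.getElementById('faq-tab').addEventListener('click', function () {
    document.getElementById('faq-answer').textContent = '375°F (190°C) for most standard loaves.';
  });
</script>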

Authority Signals For AI

E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) isn’t just a Google concept anymore. It’s what AI systems look for across the board, even if they don’t use the term.

Microsoft’s October 2025 guidance describes the baseline: success starts with content that is “fresh, authoritative, structured, and semantically clear.” On the clarity side, they’re specific: “avoid vague language. Terms like innovative or eco mean little without specifics. Instead, anchor claims in measurable facts.” Saying something is “next-gen” or “cutting-edge” without context leaves AI unsure how to classify it.

The research backs this up. The original GEO paper found that writing in a persuasive or authoritative tone did not improve AI visibility. Facts and cited sources did. Marketing language doesn’t impress algorithms.

This connects to the University of Toronto’s finding about earned media dominance. AI systems trust third-party validation more than self-promotion. In consumer electronics, AI cited third-party authoritative sources 92.1% of the time compared to Google’s 54.1%. The implication: getting your expertise published on industry websites, earning press coverage, and building a presence on authoritative platforms matters more for AI visibility than perfecting the copy on your own site.

Freshness is a signal, not a bonus. Stale content rarely gets cited. Krishna Madhavan said at Pubcon Cyber Week: “Stale or missing content will constrain the amount of retrieval we can do and push agents toward alternative sources.”

Schema Markup: From Text To Knowledge

Microsoft’s October 2025 post devotes an entire section to schema. They describe it as code that “turns plain text into structured data that machines can interpret with confidence.” Schema can label your content as a product, review, FAQ, or event, giving AI systems explicit context instead of forcing them to guess. Krishna Madhavan reinforced this at Pubcon: “Schemas are super useful. They help the system discern exactly what your information is without us having to guess.”

The GEO-16 framework confirms this from the academic side. Structured data was one of the top three factors predicting AI citation likelihood, alongside metadata/freshness and semantic HTML.

The schema types that matter most for AI visibility:

  • FAQPage for question-and-answer content (directly maps to how AI formats responses; see the sketch after this list).
  • HowTo for step-by-step instructions.
  • Product with Offer, AggregateRating, and Review for ecommerce.
  • Article/BlogPosting for content with clear authorship and dates.
  • Organization for business identity.
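Here is what the first of those might look like as a minimal JSON-LD sketch (the question and answer text are placeholders, not required values):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What temperature should I bake bread at?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Most standard loaves bake at 375°F (190°C) for 30 to 35 minutes."
    }
  }]
}
</script>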

Pair structured data with IndexNow for freshness. As the Bing Webmaster Blog put it: “IndexNow tells search engines that something has changed, while structured data tells them what has changed. Together, they improve both speed and accuracy in indexing.”
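In practice, a single-URL IndexNow ping can be as simple as a GET request. A sketch (the page URL and key are placeholders; the key must match a key file hosted on your domain):

curl "https://api.indexnow.org/indexnow?url=https://www.example.com/updated-page&key=YOUR_INDEXNOW_KEY"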

Crawler Permissions: Who Gets In

AI search engines use distinct crawlers, and most let you control training and search access separately. Here’s who to allow.

Bot             | Platform            | Purpose              | Robots.txt Token
OAI-SearchBot   | ChatGPT             | Search index         | OAI-SearchBot
GPTBot          | OpenAI              | Model training       | GPTBot
ChatGPT-User    | ChatGPT             | On-demand browsing   | ChatGPT-User
Bingbot         | Microsoft Copilot   | Search + AI          | Bingbot
Googlebot       | Google AI Overviews | Search + AI          | Googlebot
Google-Extended | Google              | Gemini training      | Google-Extended
PerplexityBot   | Perplexity          | Search + index       | PerplexityBot
Perplexity-User | Perplexity          | On-demand browsing   | Perplexity-User
ClaudeBot       | Anthropic           | Training + retrieval | ClaudeBot

A sensible robots.txt configuration might allow search crawlers while blocking training:

User-agent: OAI-SearchBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

OpenAI provides the cleanest bot separation. You can allow OAI-SearchBot (so your content appears in ChatGPT search) while blocking GPTBot (so it’s not used for model training). Google’s controls are less granular: blocking Google-Extended prevents Gemini training but has no effect on AI Overviews, which use Googlebot.

OpenAI also offers the most specific technical recommendation of any AI search provider. For their Atlas browser (which uses a standard Chrome user agent, not a bot identifier), they recommend following WAI-ARIA best practices: “Add descriptive roles, labels, and states to interactive elements like buttons, menus, and forms. This helps ChatGPT recognize what each element does and interact with your site more accurately.” Accessibility and AI agent compatibility are the same work.
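A minimal sketch of what that guidance looks like in markup (the element names and label text here are invented for illustration):

<!-- Descriptive labels and roles on interactive elements -->
<button aria-label="Add the 14-inch ultralight laptop to your cart">Add to cart</button>

<nav aria-label="Product categories">...</nav>

<div role="tablist" aria-label="Product details">
  <button role="tab" aria-selected="true">Specifications</button>
  <button role="tab" aria-selected="false">Reviews</button>
</div>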

A caveat on Perplexity: while their documentation states they respect robots.txt, Cloudflare documented in August 2025 that Perplexity uses undeclared crawlers with rotating IPs and spoofed browser user agents to bypass no-crawl directives. This is a contested claim, but it’s worth knowing.

For revenue, Perplexity is the only platform currently offering publisher compensation. Their Comet Plus program provides an 80/20 revenue split (publishers keep 80%) across direct visits, search citations, and agent actions.

Google Vs. Microsoft: Two Philosophies

The contrast between Google and Microsoft on AEO is striking enough to be its own story.

Google says: just do good SEO. Their official documentation is deliberately minimalist: “There are no additional requirements to appear in AI Overviews or AI Mode, nor other special optimizations necessary.” They add that you “don’t need to create new machine readable files, AI text files, or markup to appear in these features.”

Google recommends helpful, reliable, people-first content demonstrating E-E-A-T. Standard structured data. Good page experience. Technical basics. Nothing AI-specific.

Microsoft says: here’s the playbook. Their October 2025 blog post and January 2026 guide provide detailed, actionable guidance. Specific heading structures. Schema recommendations. Content formatting rules. Concrete examples (an AEO product description vs. a GEO product description). Warnings about content hidden in tabs and expandable menus. A framework for thinking about crawled data, product feeds, and live website data as three distinct layers.

What explains the difference? Partly market position. Google dominates search and has less incentive to help publishers optimize for AI features that might reduce clicks to their websites. Microsoft, with Bing’s roughly 8% market share, benefits from providing publishers with reasons to optimize specifically for their ecosystem.

But there’s a practical takeaway: Microsoft’s guidance isn’t Bing-specific. The principles of structured content, clear headings, snippable formats, schema markup, and expert authority are universal. Following Microsoft’s playbook improves your content for every AI system, including Google’s. Google just won’t tell you that.

Measuring AI Visibility

This is the hard part. Traditional SEO has Google Search Console. AI visibility is still fragmented.

Ahrefs analyzed 1.9 million citations from 1 million AI Overviews and found that 76% of citations come from pages already ranking in Google’s top 10. The median ranking for the most-cited URLs was position 2. Traditional ranking still matters for AI citation, but being No. 1 is “a coin flip at best” for getting cited.

The traffic impact is significant. Ahrefs found that AI Overviews correlate with 58% lower click-through rates for the No. 1 position. Seer Interactive reported a 61% organic CTR drop for queries with AI Overviews. But being cited within the AI Overview gives 35% more organic clicks compared to not being cited. Citation is the new ranking.

For tracking, the tool landscape is emerging:

Tool                 | What It Tracks                                             | Starting Price
Profound             | Citations across ChatGPT, Perplexity, Copilot, Google AIOs | From $99/mo
Peec.ai              | Brand mentions across ChatGPT, Gemini, Claude, Perplexity  | From ~$95/mo
Advanced Web Ranking | AIO presence tracking in Google                            | Included in plans
Bing Webmaster Tools | AI Performance Report for Copilot                          | Free

Bing Webmaster Tools is the easiest starting point. It’s free, and the new AI Performance Report shows how your content performs in Copilot citations. For ChatGPT specifically, track utm_source=chatgpt.com in your analytics. OpenAI automatically appends this to referral URLs.
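If you have raw server logs, a rough first pass (a sketch; the log path and format will vary by server) is a simple count of requests carrying that parameter:

# Count requests that arrived with the ChatGPT referral parameter
grep -c 'utm_source=chatgpt.com' /var/log/nginx/access.log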

Conductor’s January 2026 report found that 87.4% of AI referral traffic comes from ChatGPT. That’s one platform dominating the space, which makes tracking it particularly important.

Key Takeaways

  • AI selects fragments, not pages. Structure your content in self-contained, extractable sections with descriptive headings that signal where each idea starts and ends.
  • Clarity beats persuasion. Factual accuracy, cited sources, and direct answers outperform authoritative tone and marketing language. The research consistently shows this.
  • Earned media dominates brand content in AI citations. Press coverage, third-party reviews, and authoritative mentions on other websites carry more weight than your own pages. Build presence beyond your domain.
  • Schema markup is a force multiplier. FAQPage, HowTo, Product, and Article schemas make your content machine-readable. Pair with IndexNow for freshness.
  • Follow Microsoft’s playbook, even for Google. Google says “just do good SEO.” Microsoft provides specific, actionable guidance that improves content for every AI system, Google’s included.
  • Separate training from search in your robots.txt. Allow search crawlers (OAI-SearchBot, Bingbot, PerplexityBot) while blocking training crawlers (GPTBot, Google-Extended) if that’s your preference. You have more control than you might think.
  • Track AI visibility now. Use Bing Webmaster Tools (free), monitor utm_source=chatgpt.com in analytics, and consider dedicated tools as the measurement space matures.

Traditional SEO asked: “How do I rank?” AEO asks: “How do I become the fragment that gets selected?” The answer isn’t a single trick. It’s clear structure, verifiable expertise, and content that AI can confidently extract and cite.

Up next in Part 3: the protocols powering the agentic web, including MCP, A2A, NLWeb, and AGENTS.md, and how they fit together.



This was originally published on No Hacks.


Featured Image: Meepian Graphic/Shutterstock

https://www.searchenginejournal.com/answer-engine-optimization-how-to-get-your-content-into-ai-responses/570055/




Google Takes Search Live Global With Gemini 3.1 Flash Live via @sejournal, @MattGSouthern

Google is expanding Search Live to more than 200 countries and territories, bringing voice and camera conversations to AI Mode globally.

The expansion is powered by Gemini 3.1 Flash Live, a new audio model that Google calls its highest-quality yet. It’s inherently multilingual, so you can speak with Search in your preferred language without switching settings.

Search Live was previously limited to the U.S.

What’s Changing

Search Live lets you talk to Google Search inside AI Mode instead of typing a query. You ask a question out loud and get an audio response, then continue with follow-ups. Web links appear on screen alongside the voice responses.

The feature also supports camera input. Point your phone at a product label or a piece of equipment and ask Search about what it sees. Google Lens users can tap a “Live” option to start a conversation about what’s in the camera view.

With today’s expansion, both voice and camera capabilities are available in every market where AI Mode is active.

The New Model

Gemini 3.1 Flash Live replaces the previous audio model powering Search Live. Google published benchmark results alongside the announcement.

Gemini Live can now follow a conversation thread for twice as long as the previous model, according to Google, though the company didn’t specify what the previous limit was.

Beyond Search, 3.1 Flash Live is available to developers in preview through the Gemini Live API in Google AI Studio.

Why This Matters

Search Live turns search into a spoken conversation with camera input. Until now, the feature was limited to U.S. users. Today’s expansion makes it available in the markets where AI Mode is live, across more than 200 countries and territories.

There’s no public data yet on how many people use Search Live or how it affects query volume. But Google has been building toward this for the past year. The company launched Search Live in June, added video input in July, and upgraded to Gemini 2.5 Flash Native Audio in December. Each update expanded what the feature can do and who can use it.

Looking Ahead

Google didn’t announce additional Search Live features alongside this expansion. The focus is on geographic reach and the underlying model upgrade.

How the model performs in production across different languages and markets will be worth watching as adoption data becomes available.

https://www.searchenginejournal.com/google-takes-search-live-global-with-gemini-3-1-flash-live/570602/




NanoClaw Creator Loses SEO Battle To Impostor Website via @sejournal, @MattGSouthern

The creator of NanoClaw, an open source AI agent platform with over 18,000 GitHub stars, says Google is ranking a fake website above his project’s real site.

In tests conducted on March 5, an impostor site ranked at the top of Google for the project’s own name. The real website, nanoclaw.dev, did not appear in the first several pages of results.

What’s Happening

Gavriel Cohen, a software engineer and former Wix developer, posted a thread on X describing the problem.

Cohen launched NanoClaw in early February as a security-focused alternative to OpenClaw, the viral open source AI agent platform. The project grew quickly. VentureBeat covered it, The Register profiled Cohen, and AI researcher Andrej Karpathy publicly praised the project’s architecture.

Around February 8, someone registered nanoclaw.net and created an auto-generated site scraped from the project’s GitHub README. Cohen said he didn’t have a website at the time because the GitHub repo was the project.

As the project gained press coverage, people kept contacting him about problems with “his” website. It wasn’t his.

He built the real site at nanoclaw.dev and then took several standard SEO and remediation steps. He linked it from the GitHub repo. He added structured data. He submitted to Google Search Console. He filed takedown notices with Google, Cloudflare, and the domain registrar. Publications covering the project linked to nanoclaw.dev.

As of March 5, the impostor site still ranked above the real one.

In his thread, Cohen wrote that the fake site is “showing factually wrong information about the project and falsifying its publication dates.” He called the situation “a live, active security risk” because the person running nanoclaw.net could replace the page content with malicious download links or a phishing page at any time.

The Hacker News thread about Cohen’s complaint reached 315 points and over 150 comments within hours.

Same Problem Across Search Engines

Hacker News commenters tested the same search on other engines and found the problem extends beyond Google.

One commenter reported that the fake site ranked #1 on DuckDuckGo and #3 on Kagi, while the real site didn’t appear on DuckDuckGo at all. Another found that Bing, Brave, Ecosia, and Qwant all showed the fake site in top positions. Mojeek was the only engine tested that ranked the real site and excluded the fake one.

Why This Matters

In the past, Google’s John Mueller said that copied content consistently ranking above the original may point to a site quality problem. Mueller suggested site owners reassess their overall quality if this keeps happening.

Cohen’s case tests that logic. His project has 18,000 GitHub stars, coverage from CNBC, VentureBeat, and The Register, a Karpathy endorsement, and a blog post that hit #1 on Hacker News. Every social profile and the GitHub repo itself point to nanoclaw.dev. On its face, many of the visible signals appear to favor the real site.

The fact that Hacker News commenters reported similar results across multiple search engines suggests something deeper than a Google-specific bug. One possible factor is timing, as the fake site appears to have been indexed before the real site launched.

For anyone building a new product, the key takeaway is to reconsider when to register a domain and stand up at least a minimal official site. Cohen focused on shipping code before building a website. That’s standard open source practice, but search engines indexed the impostor first, and correcting that after the fact has proved harder than the recommended remediation steps suggest it should be.

Looking Ahead

Cohen has not indicated whether Google responded to his takedown requests. One SEO practitioner in the Hacker News thread offered concrete advice, including mapping the fake site’s backlinks and contacting publications that accidentally linked to the wrong domain.

The situation remains unresolved. Google had not commented at the time of publishing.


Featured Image: Elnur/Shutterstock

https://www.searchenginejournal.com/nanoclaw-creator-loses-seo-battle-to-impostor-website/568885/




The Verified Source Pack Agents Trust First via @sejournal, @DuaneForrester

Structured data helped machines interpret pages. It reduced ambiguity. It made entities and attributes legible to crawlers that were otherwise guessing.

Agents change the job because they do not just interpret pages. They decide, summarize, recommend, and sometimes execute. That means they need more than “this page is about X.” They need “this is the official truth about X, it is current, and you can verify it.”

That is the gap most teams have not addressed yet.

Image Credit: Duane Forrester

If you are a technical SEO, you’ve already done the hard part of this job in other forms. You’ve built crawl paths, canonicalization systems, change control habits, structured data governance, and index hygiene. A Verified Source Pack is the next packaging layer. It is not a replacement for pages. It is not a replacement for schema. It is a distribution artifact that sits beside both.

The simplest framing is this. In an agent world, brands ship a machine-consumable “official truth” pack. It includes structured facts and operational rules an agent can safely ingest: products, pricing rules, inventory behavior, guarantees, credentials, policies, support workflows, and explicit constraints. It is delivered with provenance, versioning, and a clear discovery path.

Call it a Verified Source Pack. Call it an Official Knowledge Pack. Call it an Agent Source Object. The naming will evolve, but the need will not. The need is here, today.

Why This Matters Now

Agents optimize for trust and completion.

If an agent is going to recommend a product, explain your return policy, determine warranty eligibility, estimate delivery windows, or suggest a plan that includes you, it needs facts that do not wobble. If it can’t get those facts with confidence, it does one of three things. It hedges and becomes vague. It pulls from third parties that look more structured. Or it avoids recommending you at all because the risk of being wrong is too high.

This is why classic brand signals are not enough. Brand matters to humans. Agents need machine trust, and machine trust is not vibes. It is structure, provenance, and freshness.

We Are Early, And That’s Fine

Search had 25+ years to standardize conventions. This new ecosystem is younger and messier. There is no single, universally adopted “truth pack” standard today.

What exists instead is a set of practical primitives you can assemble in a way that works now and remains compatible with the future. Think of this as the early sitemap era. If you shipped clean signals early, you won. The mechanics changed over time, but the principle held.

Where Llms.txt Fits, Even With Its Limits

You’ll hear about /llms.txt in this conversation. It is a proposal for publishing a curated map of your site, intended for LLMs and agents at inference time. The spec is here: https://llmstxt.org/.

The critical point is what it is not. It is not a vendor-backed commitment. No major LLM provider has publicly signed on saying “we will consume llms.txt” as a standard behavior. That does not mean systems ignore it, but it does mean you should treat it as more of a directional hint, not a trust mechanism.

What is interesting, and worth calling out, is that solution providers are already responding. Yoast has documented how it generates llms.txt, including update behavior, which signals that parts of the ecosystem believe this will matter even if the platforms have not formally blessed it yet.

You can see similar “this is becoming a thing” signals from other platforms. For example, Optimizely recently published guidance on llms.txt as well.

So, I mention llms.txt as an example of a discovery layer. It is not a guaranteed ingestion path. It is a convenience map that can point at your real asset, which is the verified pack.
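For orientation, the proposed format is plain markdown served at /llms.txt: an H1 with the site or project name, a short blockquote summary, and sections of annotated links. A minimal sketch (the names, paths, and descriptions below are invented):

# Example Brand

> Official site for Example Brand: product data, policies, and support documentation.

## Policies

- [Returns policy](https://www.example.com/returns.md): full returns and refund rules
- [Warranty terms](https://www.example.com/warranty.md): coverage, exclusions, and claim steps

## Optional

- [Press coverage](https://www.example.com/press.md): third-party articles about the brand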

The Verified Source Pack, Explained As A Complete System

Verified Source Pack has four parts. Each part answers a different question an agent implicitly asks.

First, The Content

What is the truth you are publishing?

This is not “content marketing.” This is operational truth the business would stand behind. In ecommerce, for example, it includes your product catalog, your pricing rules, your inventory behavior, shipping and returns policies, warranty terms, guarantees, service coverage, support workflows, and explicit constraints. Constraints matter because agents otherwise guess. If you do not clearly state exclusions, eligibility rules, edge cases, and limits, you are forcing the model to infer them from messy pages or third parties.

Second, The Structure

Can a machine ingest it predictably?

This usually means two modes. A dataset mode for facts that can be downloaded and parsed, and a contract mode for facts that change fast or require live validation.

Dataset mode is boring on purpose. JSON for structured facts. CSV for bulk lists if you have to. A changelog that records what changed and when. The goal is not elegance. The goal is predictable parsing.

Contract mode is where your technical SEO role gets real leverage, because it is the point where you ask your dev team for an endpoint. One clean endpoint that returns the pack index, plus one signed manifest. If you can only get one thing built this quarter, get that.

Third, The Provenance

How does an agent know it is yours and unmodified?

Provenance starts with domain control and TLS, but it should not stop there. Provenance means you version the pack, timestamp it, hash the files, and sign the index. That creates an integrity model that a machine can validate.

If you want a real-world standard to anchor the idea of cryptographically verifiable provenance, C2PA is one of the clearest references. It is best known for media authenticity, but the underlying concepts map cleanly: manifests, hard bindings via hashes, and verifiable claims. Start with the C2PA specifications index here and the technical specification here.

You do not need to implement C2PA end-to-end to benefit from the pattern. The point for SEOs is that “trust” can be made explicit through verifiable artifacts, not implied through branding.

Fourth, Discoverability

Can systems reliably find the pack?

A Verified Source Pack that cannot be found is a private internal doc, not an external trust signal. Host it under your domain in a stable, boring path. Link to it from a relevant page like Policies, Support, or Developer docs. Include it in your sitemap. Optionally point to it from llms.txt as a hint.

The SEO-Friendly Build Flow

Here is the same system, but framed as a practical flow you can run with your team.

Start by inventorying your truth domains. Define what the business would defend as official truth. For ecommerce, that means products, pricing rules, inventory logic, shipping rules, returns policy, warranty terms, guarantees, and support workflows. Add constraint truth as a first-class domain. Write down exclusions, eligibility requirements, and boundaries. If you skip constraints, the agent fills the gap with assumptions.

Next, canonicalize. You do not need perfection, but you need a declared canonical source for each truth domain. If five pages disagree on returns, pick the canonical version and update the others over time. The pack is how you stop the bleeding.

Then ship the pack in two layers. Publish the dataset files and publish a single pack index that references them. The pack index is your “front door” and should include the pack version, last updated time, file URLs, hashes, and verification details.
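As one possible shape (not a standard; every field name, path, and value here is an assumption for illustration), a pack index might look like:

{
  "pack_version": "2026.02.1",
  "last_updated": "2026-02-12T09:00:00Z",
  "publisher": "https://www.example.com",
  "files": [
    {
      "name": "returns-policy",
      "url": "https://www.example.com/.well-known/pack/returns.json",
      "sha256": "3f6c1f0e…"
    },
    {
      "name": "product-catalog",
      "url": "https://www.example.com/.well-known/pack/catalog.json",
      "sha256": "9a2b44d1…"
    }
  ],
  "signature_url": "https://www.example.com/.well-known/pack/index.json.sig"
}

The detached signature file referenced at the end is what the signed manifest deliverable produces.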

At this point, you ask for two technical deliverables from your dev team.

  1. Deliverable one is one endpoint. It returns the pack index which gives agents a consistent, requestable source rather than a scraping problem.
  2. Deliverable two is one signed manifest. That can be as simple as a detached signature for the index file, or a signature field embedded in the index. The implementation can vary, but the intent is constant: integrity and provenance.
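Deliverable two does not require exotic tooling. A minimal sketch using OpenSSL with an Ed25519 key pair (file names are placeholders; your security team may prefer different algorithms or key management):

# Generate a key pair once; the private key stays offline
openssl genpkey -algorithm ed25519 -out pack-signing.key
openssl pkey -in pack-signing.key -pubout -out pack-signing.pub

# Produce a detached signature for the pack index
openssl pkeyutl -sign -inkey pack-signing.key -rawin -in index.json -out index.json.sig

# Anyone holding the published public key can verify integrity
openssl pkeyutl -verify -pubin -inkey pack-signing.pub -rawin -in index.json -sigfile index.json.sig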

If your org can publish a callable endpoint, describe it with OpenAPI. It’s a widely used, vendor-neutral way to define API contracts, and it’s already accepted in multiple agent ecosystems, including GPT Actions, Microsoft 365 Copilot API plugins, and Google Vertex AI Extensions.

This matters because it reduces friction, and you are not inventing a bespoke integration. You are publishing a contract that agents and tooling ecosystems already know how to consume.
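A minimal OpenAPI sketch of that single endpoint might look like the following (the path, title, and descriptions are placeholders, not a standard):

openapi: 3.0.3
info:
  title: Example Brand Verified Source Pack
  version: "1.0"
paths:
  /.well-known/pack/index.json:
    get:
      summary: Return the current signed pack index
      responses:
        "200":
          description: Pack index with version, timestamps, file URLs, and hashes
          content:
            application/json:
              schema:
                type: object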

Finally, operationalize freshness. Add review-by dates and a changelog. Inventory and pricing should be updated frequently or exposed via live endpoints. Policies can be versioned on change. Credentials should update on renewal and revocation events. Support workflows should update when your operations change.

Treat the pack like infrastructure. Infrastructure decays when it has no owner, so assign an owner.

Here’s An Ecommerce Example

Imagine a mid-market ecommerce brand. Today, product attributes live in the catalog, warranty terms live in an FAQ, returns rules live across three pages, shipping exceptions live in a footer, and “what counts as refurbished” exists only in support scripts. Humans can muddle through. Agents cannot.

A Verified Source Pack fixes that by creating one coherent, machine-ingestible representation of those truths.

The pack index points to a product catalog dataset, a pricing rules dataset, a returns and shipping policy dataset that includes edge cases, a warranty and guarantee dataset, a support workflow dataset, and a constraints dataset that spells out what is excluded and what requires human confirmation. The index is versioned and signed. The index can be retrieved via an endpoint. The pack is hosted under the brand domain and linked from policy pages.

Now, when an agent asks, “Can I return this item if it was opened?” it has an authoritative, structured place to look. When it asks, “Is this product available in my ZIP code?” the brand can expose a live endpoint. When it needs to summarize warranty terms, it can do so without guessing, and without relying on a third-party blog post from 2019. That is the win you’re after here.

Sidebar: Healthcare, Where Trust Is Regulated

Healthcare teams have extra constraints that ecommerce does not.

First, you must avoid publishing anything that could be interpreted as protected personal information, or that encourages an agent to infer patient-specific conclusions.

Second, you have regulatory boundaries around claims. Treatments, outcomes, eligibility, and recommendations cannot be reduced to marketing copy. They need carefully scoped, auditable statements.

Third, you need change control and auditability. If a policy changes, you need a clear record of what changed and when.

For healthcare, a Verified Source Pack should lean hard into constraints. Spell out what the system can state, and what requires a clinician or a formal consult. Publish provider credentials, service coverage, appointment workflows, billing and insurance boundaries, privacy and security policies, and escalation paths. Sign and version everything. Make review-by dates explicit.

Sidebar: Finance, Where Guardrails Matter As Much As Facts

Finance has a similar trust profile, with different failure modes.

First, advice boundaries. Agents will naturally drift from facts into advice. Your pack should explicitly declare what is informational, what is not advice, and what requires qualified review.

Second, volatility. Rates, terms, eligibility, and fees can change quickly. Live endpoints matter more here than in ecommerce. If you publish a dataset, include “valid through” fields and enforce refresh cadence.

Third, disclosure requirements. Your pack should include the exact disclosure language and conditions required, so the agent is less likely to summarize away legally important details.

A Quick Note On MCP

You will also hear about Model Context Protocol (MCP), which is an open protocol for integrating LLM applications with external data sources and tools. The MCP spec is here.

You do not need MCP to build a Verified Source Pack. The relevance is directional. Agents are moving toward calling authoritative interfaces rather than scraping pages. Your “one endpoint and one signed manifest” is the pragmatic step that keeps you compatible with that future.

The Point, And The Opportunity For Technical SEO Leads

You are not being asked to abandon SEO, but you are being asked to extend it.

In the same way sitemaps and structured data became quiet infrastructure, Verified Source Packs will become quiet infrastructure for agentic retrieval and decisioning. Teams that publish operational truth in a machine-verifiable way reduce ambiguity, reduce downstream risk, and increase the odds they are the source the system trusts first.

If you want a single mental model, use this.

  • Pages persuade humans.
  • Schema clarifies pages.
  • Verified Source Packs package truth for agents.

That’s the new format.



This post was originally published on Duane Forrester Decodes.


Featured Image: Summit Art Creations/Shutterstock; Paulo Bobita/Search Engine Journal

https://www.searchenginejournal.com/the-verified-source-pack-agents-trust-first/568506/




Why Google Discover Is No Longer Just For Publishers via @sejournal, @theshelleywalsh

At Google Search Central Live in Zurich last December, Clara Soteras spoke about how brands and ecommerce sites can use Discover as a strategy.

Discover is a dominant source of traffic for news publishers and, for now, a channel that has resisted the encroachment of AI. So, I was interested to see how Discover might hold opportunities beyond the newsroom.

However, in Zurich, John Mueller repeated his advice that Google Discover traffic is for free, and someday it can be zero, much like the reality of Google traffic diminishing for many brands.

So, I sat down with Clara on IMHO to talk about what’s working, what’s breaking, and where the real opportunity lies in 2026.

Clara is head of innovation and digital strategy at AMIC and a professor at the Autonomous University of Barcelona, where she teaches SEO for news at several business schools.

“Discover adds to you some possibility to catch and to impact people that don’t know that they need you.”

Discover Is The Primary Channel, But With A Warning

In an article titled “Why publishers should worry about growing reliance on Google Discover,” the Press Gazette reported that for 2,000 global news and media websites, 68% of Google traffic now comes from Discover, in comparison to 32% from search.

I asked Clara if she thought Discover could offer any salvation to news publishers impacted by AI.

She confirmed what many publishers are experiencing, “Google Discover is the first channel of traffic for the majority of publishers today. And we need to understand that this is a good channel to achieve and to catch different audience, to achieve page views and volume of traffic.”

But Clara was quick to draw a line between Discover and traditional search. The ranking factors are different, and publishers who treat Discover as an extension of their search strategy are making a mistake.

“The basic things are the same, but we need to know that the location, the image, or the title, the headline are really important to rank on Google Discover.”

She also noted that not all content categories perform equally in the feed. Politics, for example, rarely appears. Publishers who want Discover visibility need to lean into lifestyle, sports, and culturally relevant content.

It’s For Free. And Someday It Can Be Zero

At Google Search Central Live in Zurich, John Mueller emphasized a key point that publishers cannot rely on getting 90% of their traffic from a single source. I asked Clara whether we might be in danger of swapping one reliance for another, from Google SERP traffic to Discover, and if publishers should be leveraging other channels.

Clara explained, “In Zurich, John Mueller repeats the same advice that Google is telling in every session that we have with the publishers. They think that the volume of traffic that Google Discover adds to your website is for free, and someday it can be zero.”

That volatility is real, “We know some publishers that start from scratch and achieve a lot of traffic and six months later they need to close the website because they lose all the traffic.”

When I asked what balance she would recommend, Clara said it depends on the size and niche of the publisher, but “you cannot have 90% of Google Discover traffic because if Google Discover decides to not see you tomorrow, you will lose all your audience because it’s not a loyal audience.”

Discover traffic is passive. It’s algorithmically injected into feeds, and Clara suggested that publishers need to diversify into social strategy, work with content creators, and consider building community around their topics.

How Brands Can Win In Discover

Clara’s presentation in Zurich was about a strategy most brands haven’t considered, applying newsroom methodology to Discover for ecommerce and brand sites.

Google has expanded the Discover feed to allow users to follow entities, creators, and companies, not just traditional publishers. YouTube, Instagram, and content creator profiles now appear, and for brands, this opens a different kind of opportunity.

“Search is the first channel probably for commerce because the user knows your brand or knows what they need. Discover adds the possibility to catch and impact and generate impressions to people that don’t know that they need you.”

Her methodology for brands mirrors what high-performing newsrooms do, which is to monitor social conversations and trends, align content to the moment, and move quickly.

“If we decide to create a strategy for a brand, we need to talk about the trend of the day. We need to talk about our product and service but really near to the trend.”

Clara is currently working with content teams at multiple companies to train them on Discover-specific execution. Headlines need to be more than 13 words, images must be chosen strategically for the format, and brands need to build entity authority through a sustained cluster strategy, publishing around the same entity on different days.

“For me, some of the best ranking factors are the image, the headline, and working with the entity every day to be a good reference for this entity.”

AI Content Can Rank In Discover, But It Doesn’t Perform

Andy Almeida from Google’s Trust and Safety team for Discover has said that nearly 20% of sites recommended by Discover are AI-generated, coining the memorable phrase “AI slop is taking over the world.” I asked Clara whether AI content is becoming a real threat that could displace legitimate news publisher content, and how publishers can defend against it.

Clara acknowledged the reality that AI content does rank on Discover. But she pointed out that Google’s quality and trust department is actively applying manual penalties when they identify AI content or fake news in the feed.

“If they think that you are publishing AI content or fake news, they can apply and ask that you need to raise or correct your content.”

More importantly, she shared a real-world example from her own clients that illustrates the performance gap between AI and human content.

“I had a client that worked with different AI tools to create basic content, and journalists adapted a little bit of this content. If you see the performance, you see only 100 views for an article versus 12,000 views for another article created by a human.”

Clara’s position is that AI can assist with ideation and strategic planning, but the content itself needs to be created by humans. Human-created content performs better in Discover, and she believes Google will continue to reward it.

“AI can give us ideas and strategic tips but for me it’s important that the content will be created by a human, by a journalist.”

The Opportunity AI Overviews Can’t Touch

With AI Overviews eating evergreen informational queries, I asked Clara what she thinks are the areas of opportunity in 2026.

Clara is currently working on research reports analyzing the impact of AI Overviews on publishers in Spain and the UK, and her data confirms, “if you work with breaking news, you have an opportunity, because of the top stories module for breaking news.”

Real-time journalism still creates visibility that AI summaries cannot fully replace. For publishers looking ahead to 2026, Clara’s focus is on reinforcing the strengths that machines can’t replicate. Breaking news speed, entity authority, trend alignment, and diversification beyond Discover itself.

Human Expertise Is The Advantage

Google Discover is a powerful channel and in many cases essential, but it’s algorithmically volatile by nature.

However, it’s an opportunity for brands and ecommerce sites. The Discover feed is no longer a news-only space, and Clara’s work applying newsroom methodology to commercial content is an approach that most brands haven’t explored yet.

Where AI is reshaping both search results and the content that feeds them, the competitive advantage is ultimately down to human expertise, editorial judgment, and the ability to move fast on what matters right now.

Watch the full interview with Clara Soteras here:


Thank you to Clara Soteras for offering her insights and being my guest on IMHO.



This post was originally published on Shelley Edits.


Featured Image: Shelley Walsh/Search Engine Journal

https://www.searchenginejournal.com/why-google-discover-is-no-longer-just-for-publishers/568777/




If AI Can’t Read Your CMS, It Can’t Recommend Your Brand [Webinar] via @sejournal, @lorenbaker

A Practical Audit for Marketing Leaders Using Enterprise-Level Content Management Systems (CMS)

AI-driven search is not a future consideration. It is already shaping how brands are discovered, evaluated, and chosen. 

Yet many CMS platforms were built for a different era of search, one focused on pages and rankings rather than structured content and machine interpretation. If your CMS cannot clearly communicate meaning to AI systems, your visibility is at risk long before a customer ever sees your site.

For CMOs and marketing leaders, this is no longer just an SEO discussion. It is a platform-level question. 

Is your CMS enabling visibility, or is your current website stack quietly limiting performance across discovery and conversion?

In this marketing leader webinar, we walk through a practical CMS audit framework designed to help marketing leaders evaluate whether their enterprise and large-scale website implementation is built for AI-driven search. 

You will gain a clear understanding of what AI readiness means at the system level and how to identify structural gaps before they impact growth.

What You’ll Learn

  • Where enterprise implementations most often fall short in AI-driven discovery
  • How AI search is reshaping SEO strategy, content modeling, and conversion performance
  • What defines an AI-ready CMS stack, including structured content and flexible architecture

Why Attend?

This webinar offers a strategic lens on whether your CMS is enabling visibility or restricting it. You will leave with a clear framework to assess risk, strengthen your digital foundation, and ensure your platform supports how discovery works today.

Register now to evaluate whether your CMS is prepared for AI-driven search.

🛑 Can’t attend live? Register anyway, and we’ll send the on-demand recording.

https://www.searchenginejournal.com/if-ai-cant-read-your-cms-it-cant-recommend-your-brand/568601/




Google Updates AI Mode Recipe Sites Results In Response To Backlash via @sejournal, @martinibuster

Robby Stein, VP of Product for Google Search, posted that Google is updating AI Mode so that it surfaces more links to creators when users search for recipes. Google’s AI Mode has generated controversy by synthesizing multiple recipes into what many have taken to calling Frankenstein recipes. This new update aims to fix that by making it easier to tap and see a link to the recipe sites.

Change To How AI Mode Displays Recipes

Google created an attractive display of recipes that, when clicked, opens a side panel showing recipe images and a summary of the recipe. The user can click through from there to visit the recipe site and explore the dish in more depth.

Robby Stein said that this change has already rolled out. I tried variations of Stein’s example keyword phrase (easy recipes for two) and was able to trigger a recipe panel, which I believe is what he’s referring to. On the left is a summary, and in this specific AI Mode result, I had to scroll down to reach the images that can be clicked.

Screenshot Of AI Mode Without Images To Click

Scrolling down the page reveals a new topic with images that can be clicked.

Screenshot Of AI Mode Images That Can Be Clicked

The problem with this AI Mode result is that it’s not clear those images can be clicked. They look decorative, and a user may not intuitively understand that clicking them will open a side panel to the right with more information on that particular dish.

Screenshot Of AI Mode Panel With Recipe

Announcement By Robby Stein Of Google

Google’s Robby Stein made a direct mention of the “feedback” they had received about how AI Mode was handling meal ideas.

According to Robby Stein:

“We’ve heard feedback on recipe results in AI Mode, and we’re making updates to better connect people with recipe creators on the web. Starting today, when you search for meal ideas like “easy dinners for two,” you can tap on the dish to see links to relevant recipe sites, plus a short overview of the dish to help with inspiration.

We’re also planning to bring helpful information like cook time to more recipe results, which testers have found useful for deciding on a recipe. We know there’s more work to be done on this, so stay tuned for future updates.”

He also posted a video of the feature in action:

Featured Image by Shutterstock/Luis Molinero

https://www.searchenginejournal.com/google-updates-ai-mode-recipe-sites-results-in-response-to-backlash/568798/




Google Zero Is A Lie

There is a pervasive narrative doing the rounds in the publishing industry called “Google Zero.” This narrative, embraced by many industry leaders, posits that traffic from Google – Search and Discover – will decline and eventually become negligible.

This Google Zero narrative is entirely false, and extremely dangerous. And I’m going to explain why.

When the concept of “Google Zero” first began to emerge, I thought it could be a useful way to frame the strategic approaches publishers should consider when optimizing for Google. But it’s taken on an entirely different meaning, one that is actively dangerous and downright false.

It’s true, gaining traffic from Google hasn’t gotten easier. Websites need to work harder to grow their share of Google visits, both in Search and Discover. This is not a new development – the writing has been on the wall for nearly two decades.

Google started enriching its search results with all kinds of different elements in 2007, intended to provide exactly the kind of information Google’s users are looking for. The clean list of 10 blue links has long been forgotten.

Since Google began introducing new elements into its search results, every new feature has diverted clicks away from websites. Often, these clicks were channeled toward Google’s own properties like YouTube, Google Maps, or the image search vertical. And increasingly, searches didn’t result in any clicks at all when the right information was shown to the user directly on the results page.

This trend continued with every new feature Google introduced into its results. Many websites were affected. Lawsuits were launched – some of which are still ongoing.

News publishers didn’t really feel the pain, however. On the contrary, the introduction of news carousels on Google’s results increased the traffic Google sent to publishers.

And then AI Overviews arrived, and everybody panicked.

Apparently, The Verge’s Nilay Patel was the first to coin “Google Zero” as a phrase, though I suspect he was more than a little inspired by Sparktoro’s Amanda Natividad and Rand Fishkin, who have been talking about “Zero-Click Marketing” for years.

I understand why Nilay is worried about Google. According to Similarweb, Google traffic to The Verge has been steadily declining since late 2023, predating the launch of AI Overviews.

Similarweb data showing The Verge losing Google traffic
Image Credit: Barry Adams

Interestingly, this graph shows that Google is still the largest organic channel for The Verge, surpassed only by direct visits (which, by the way, are also declining). And you’ll also be interested to know that the periods of strongest Google traffic decreases on The Verge correlate with Google core algorithm updates and Site Reputation Abuse penalties.

I find it funny that The Verge seems to have an existential issue with the SEO industry as a whole. That, too, might contribute to their less-than-stellar performance in Search in recent years. Not to mention the fact that every channel is sending less traffic to The Verge in recent years.

Perhaps it’s not entirely Google’s fault that The Verge is experiencing a decline.

One website’s editor complaining about Google traffic doesn’t make for a narrative. Yet somehow, the Google Zero story has become embedded in the publishing industry, with very little critical analysis.

A few weeks ago, I was at a news-focused conference where one of the speakers presented a slide showing data from Chartbeat. This data indicated a huge decline in Google traffic to many of Chartbeat’s customers.

Chartbeat graph showing 33% Google traffic decline
Image Credit: Barry Adams

The data was published on the Reuters Institute’s website as part of their 2026 predictions, and seems to have been accepted as gospel by many in the industry.

The speaker who presented this slide works for one of my clients. I have access to this client’s Google Search Console data for dozens of their websites across Europe. I know exactly how much Google traffic they’ve lost in the last few years.

They haven’t lost any.

In fact, the speaker’s employer is showing growth in Google traffic across many of their websites. Yet the speaker presented the Chartbeat graph as fact, without any caveat, despite having access to a wealth of data that contradicts it.

It’s not just my clients – Press Gazette recently did a deeper dive into the Google Zero panic, speaking with many UK publishers. A clear consensus emerged: Google traffic isn’t actually declining all that much.

This is supported by data from Similarweb, published by Graphite, showing the actual decline of Google traffic to the top websites on the global web is … drumroll … 2.5%.

Image Credit: Barry Adams

So, why does the Chartbeat data show such a strong decline when other sources do not? I have theories. One is that Chartbeat’s data is skewed by several of their largest clients, who may have suffered from Google’s core algorithm updates and Site Reputation Abuse penalties.

The Chartbeat data appears to be a simple aggregate, not taking individual sites’ comparative sizes into account. So when a few big sites experienced strong losses, it would skew the data heavily towards a decline, even when dozens of smaller sites don’t see any meaningful decreases.

When we look at Similarweb’s data on global web traffic, Google is still by far the most-visited website in the world, accounting for nearly 20% of all web visits. This hasn’t changed in any meaningful way in the last few years.

Similarweb data showing Google as the most visited website in the world
Image Credit: Barry Adams

Despite an abundance of contradicting data, the Google Zero panic has permeated the publishing industry. Not a week goes by without some C-level leader at a publisher declaring a shift away from Google towards other channels for audience growth.

I’m all for diversifying traffic sources. Publishers need to be less reliant on Google for their traffic, and have alternative sources of visitors that can sustain their business model. I’ve been on record saying exactly that for years.

But traffic diversification should not come at the expense of SEO. When you take your eye off the Google ball, you’re making a colossal mistake.

No matter how you interpret the data, Google is still by far the single largest source of visitors for websites. There is literally no other channel that comes close (keeping in mind that direct traffic isn’t a channel – it’s all traffic where there is no referral string associated with the visit).

Yes, it’s gotten harder to win in Google. I’ve outlined some of the underlying reasons in my AI Survival Strategies article.

But when things get harder, the dumbest course of action is to give up.

If you lower your investment in SEO, guess what happens? You lose more Google traffic. This will then reinforce your preconceived notion of Google Zero, so you invest even less in SEO, and down the spiral goes until you’re dead in the water.

Your Google Zero prophecy has come true because you’ve made it come true.

In the meantime, competing websites that keep investing in SEO will happily scoop up the clicks you’ve abandoned.

There is literally no sign that Google is in danger of losing its position as the largest source of traffic to the web. There is no other channel rising to take Google’s place. Choosing to abandon Google is a potentially catastrophic strategic error.

Consider yourselves warned.


This post was originally published on SEO For Google News.


Featured Image: Anton Vierietin/Shutterstock

https://www.searchenginejournal.com/google-zero-is-a-lie/568668/




Why Most Enterprise SEO Operating Models Are Structurally Broken via @sejournal, @billhunt

Enterprise SEO doesn’t usually fail because of bad tactics. It fails because the operating model itself makes success nearly impossible.

For years, organizations have treated SEO like a downstream marketing function, one that audits what others build, files tickets, and hopes development or content teams eventually implement recommendations. That model worked (barely) when search engines simply ranked pages. But in today’s environment, where visibility depends on structure, eligibility, entity clarity, and machine comprehension, SEO can no longer survive as a reactive service desk.

And yet, that’s exactly where most enterprises still put it. The uncomfortable truth is this: Many enterprise SEO teams are structurally set up to lose before they even start.

The Core Problem: SEO Lives Too Far Downstream

In most large organizations, SEO sits inside marketing and is treated like quality assurance. Product or brand teams define initiatives, content teams create assets, and development builds templates and pages. SEO is then asked to review everything after launch, when the most important decisions have already been made.

By that point, issues are easy to identify but hard to change. Tickets get filed, fixes compete for priority, and implementation happens late, if it happens at all. SEO becomes a cleanup crew for choices made elsewhere.

The problem is that “quality assurance” is a misnomer. True quality assurance exists upstream, shaping plans before they harden into execution. What SEO usually does is inspection after the fact, when the opportunity to influence structure has already passed.

A recent call illustrated this perfectly. The SEO team presented a report showing hundreds of the same issues repeated across four areas of the site. The action item was familiar: Each team was asked to “fix them,” much like the report that had been circulated the month before. What no one asked was the more important question: Why are the same issues appearing everywhere, and what in the workflow is creating them in the first place?

Instead of treating the situation as a systems failure, the conversation framed it as a volume problem. More fixes. More tickets. More effort.

This is where the upstream versus downstream framing becomes tangible. The real issue isn’t that teams aren’t fixing problems fast enough; it’s that something upstream is poisoning the water. As long as the source of contamination remains untouched, the same issues will continue to surface no matter how many times they’re cleaned up downstream.

The dynamic mirrors how prevention teams are often treated more broadly. Early warnings are raised, then overridden as too cautious or as obstacles to progress. Yet when visibility drops, traffic declines, or revenue is impacted, the same team is expected to reverse outcomes created by decisions they never influenced.

Modern search does not reward post-hoc inspection or emergency response. It rewards architecture that is built correctly from the start. Search performance today is shaped upstream by decisions around information structure, entity modeling, taxonomy, internal linking frameworks, data models, and how content depth aligns to intent. Those decisions are made long before traditional SEO teams are invited into the process.

As a result, SEO teams spend most of their time fighting symptoms instead of influencing causes.

The Illusion Of “SEO Integration”

Many enterprises believe they take SEO seriously because they have the trappings of an SEO program. There is a budget allocation, an SEO team, expensive auditing tools, and dashboards. There may even be multiple agencies involved, along with a significant backlog of tickets labeled “SEO.”

But resources are not the same thing as an integrated operating model. The issue isn’t effort; it’s how those resources are deployed.

What follows isn’t a single point of failure, but a set of recurring operating patterns. Each one reflects a different way organizations claim to integrate SEO while never giving it structural leverage. The outcome is chronic underperformance that looks like a tactical problem, but is actually a structural one.

The Four Broken Enterprise SEO Models

After working with hundreds of global organizations, a consistent pattern emerges. Most enterprise SEO teams operate within one of four flawed structures. They look different on the surface, but they all produce the same outcome: reactive SEO with limited impact.

1. The Audit Factory

This is the most common model and fails at the point of prevention.

SEO runs crawls, identifies issues, produces reports, and prioritizes fixes. The team becomes exceptionally good at finding problems. What it never gets to do is prevent them. Because SEO has visibility but not authority, every finding depends on another team to act. Issues recur because root causes are never addressed. Development teams begin to view SEO as a backlog generator rather than a partner. SEO is rewarded for identifying issues, not for eliminating them.

The organization mistakes activity for impact.

2. The Ticket Desk

In this model, SEO functions like an internal help desk and fails at the point of delivery.

SEO has no built-in priority and no integration into release cycles. Influence depends on persuasion and clever project integration rather than a mandate. Over time, SEO becomes a beggar in the backlog. Tickets are filed in Jira. They enter queues already crowded with revenue-driving projects or executive pet initiatives. SEO work becomes one request among hundreds.

Implementation takes months. By the time fixes are deployed, the site has changed again.

3. The Local Islands

This is the model I have the most experience trying to change: multinational organizations where markets are like distant islands, disconnected from the heart of the organization.

Central teams define organization-wide SEO standards, but local markets control content and execution. Local priorities override global requirements. The need to “deliver for their market” means templates are resisted, shared infrastructure is avoided, and every region does its own thing.

Implementation fragments due to varied infrastructure, lack of resources, and fundamental disagreements. Effort is duplicated across markets, or conflicts depending on the SEO knowledge of each agency or local team. The result is conflicting signals sent to search engines, a problem that will only grow exponentially worse in the new AI environment.

4. The Orphaned Center Of Excellence

A Search Center of Excellence model looks great on paper, but living up to its potential is a challenge.

A typical SEO Center of Excellence is created to define standards, train teams, and share best practices. But the CoE often has no enforcement power. It doesn’t control templates, development standards, structured data policy, or workflows. Guidelines are published and quietly ignored. Speed and convenience win. SEO becomes “recommended,” not required.

The CoE becomes a library of forgotten best practices rather than the highly functional, collaborative governing body it should be.

What All Broken Models Have In Common

Despite their differences, these operating models fail for the same structural reasons. SEO is reactive rather than embedded into the workflow and consciousness of the organization, brought in after decisions are made instead of participating in them. Execution depends on other teams with different priorities, while SEO is still measured on outcomes it doesn’t control. Authority is missing from the workflows that actually shape search performance, leaving SEO to advise on decisions that have already hardened.

As a result, SEO is treated less like infrastructure and more like compliance. This is why enterprise SEO so often feels frustratingly ineffective, not because the teams are weak, but because the organization handicaps them by design.

One consequence is rarely discussed. Experienced SEOs learn to recognize these patterns quickly, and many actively avoid enterprise roles altogether. Not because the work lacks importance, but because bureaucracy replaces progress and motion substitutes for action.

Why This Is Getting Worse In The AI Era

AI-driven search doesn’t introduce new problems so much as it magnifies existing structural weaknesses. In traditional search, damage could often be undone. Rankings recovered, pages were reindexed, and signals eventually recalibrated.

AI systems behave differently. They reward clean structure, clear entity definitions, consistent signals, deep topical coverage, and machine-readable relationships. Those qualities aren’t additive features that can be patched in later; they are properties of how a site and its underlying systems are built.
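To make “clear entity definitions and machine-readable relationships” concrete, here’s a minimal sketch using schema.org JSON-LD, with invented names and URLs. The detail that matters is the @id reference tying the product back to the organization; relationships like this live in page templates and data models, exactly the layers most SEO teams don’t control.

import json

# Invented example of schema.org JSON-LD: an organization entity, and a
# product that points back at it via @id. These relationships live in
# page templates and data models, not in a post-launch audit ticket.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": "https://www.example.com/#organization",
    "name": "Example Corp",
    "url": "https://www.example.com/",
    "sameAs": [
        "https://www.linkedin.com/company/example",  # invented profile URL
    ],
}

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    # The brand is a reference to the organization entity, not a bare string:
    "brand": {"@id": "https://www.example.com/#organization"},
}

# Each page would embed its object in a <script type="application/ld+json"> tag.
print(json.dumps(organization, indent=2))
print(json.dumps(product, indent=2))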

These weaknesses are not new, but they have been amplified by how search itself has evolved. As I explored in my previous Search Engine Journal article, “AI Search Changes Everything – Is Your Organization Built to Compete?”, AI-first search no longer surfaces brands based on rankings alone. It relies on structured understanding, entity representation, and organizational alignment. That shift makes structural integration critical because visibility in AI-driven ecosystems depends on how well internal systems and teams align with the way machines interpret and present information.

When an operating model prevents SEO from influencing those foundational elements, the impact extends beyond traditional SERPs. Visibility erodes across AI-generated answers, recommendations, and synthesized results, often without a clear recovery path.

Structure can’t be retrofitted into a system that was never designed to let SEO shape it.

The Real Takeaway

Enterprise SEO struggles are rarely tactical failures. They are organizational design failures disguised as execution problems. Most companies never built SEO into product workflows, development requirements, content planning, market rollouts, or governance structures. Instead, SEO was positioned as a review layer, brought in after decisions were already made.

Modern search punishes this model not through penalties, but through exclusion. Eligibility is determined upstream by structure, consistency, and machine-readable clarity, long before traditional SEO reviews take place. AI-driven systems don’t correct ambiguity after the fact; they synthesize only what they can confidently understand. When SEO is positioned as a downstream review layer, it loses the ability to influence those decisions, and visibility erodes quietly across answers, recommendations, and synthesized results with few clear recovery paths.

Coming Next In The Series

In the next article, I’ll outline what high-performing organizations do differently and introduce the embedded SEO operating model that shifts search from an audit function to a built-in enterprise capability.

Because SEO doesn’t fail from lack of effort. It fails due to a lack of structural integration.

And structure is something organizations can fix, if they’re willing to rethink where SEO actually belongs.


Featured Image: MR Chalee/Search Engine Journal

https://www.searchenginejournal.com/why-most-enterprise-seo-operating-models-are-structurally-broken/566075/




We’re Bringing The SEJ Newsroom To You, Live [Free Event] via @sejournal, @hethr_campbell

We’re bringing the SEJ newsroom to a screen near you.

On March 11 from 12–3pm ET, we’ve gathered our own search experts, alongside some very special guests, to help you master AI search visibility this year. This is SEJ Live, a new series we’ve been building behind the scenes, and I couldn’t be more excited to see it come to life.

Here’s why this matters right now: I’m seeing a huge disconnect between leadership and the marketing teams doing the work. Leadership wants performance yesterday, but those with their feet on the ground know that customer behaviors have changed. The metrics we used to rely on, the strategies our plans were built on, no longer tell the actual story. And strategies need to change.

SEJ Live is designed to help you bridge that gap.

Come, chat with others deep in these shifts, ask the questions your team is wrestling with, and see how these experts are making sense of AI-influenced search.

We’re all in the same boat, so let’s commiserate together, and share our learnings as the search community is so well known for doing!

What We’re Covering

Session 1: Newsroom: 1 New AI Search Reality. 3 SEJ Leader Perspectives
Three SEJ leaders break down 2025’s AI search shifts from three angles. Loren Baker covers the business side, Matt Southern breaks down the news, and Shelley Walsh translates the impact on your content. The question for all three: What should experienced marketers do about it heading into Q2 2026?

Session 2: Traffic Changes Or Measurement Gaps? Expand Your SEO KPIs For AI Search
AI search requires a new set of KPIs tailored to how discovery and conversion occur today. Learn which metrics accurately reflect AI-driven visibility and performance. Emily Popson, VP of Marketing at CallRail, shows you how to replace outdated metrics with KPIs aligned to modern search behavior.

Session 3: Why Answer Engines Should Be on Every CMO’s Strategic Agenda
And, I’m very excited to have our guest speaker Nikhil Lai, Principal Analyst at Forrester, join us. Nikhil is going to share identified shifts in search behavior, how different teams should align around these changes, and what’s next for answer engines and you.

Who Should Be There

We’re speaking to the CMOs, marketing directors, and search leaders who are past the “AI is coming” conversation and need the “here’s what to do right now” conversation.

Each session builds on the one before it, intended to spark changes you need to make in your own strategy.

This first SEJ Live is being thoughtfully planned to bring this community and conversation to the forefront. All three sessions are live with full Q&A at the end, so bring your hardest questions. The presentations are going to be insightful, and I expect the chat is going to be lit.

Tell your peers and your team. If you’re responsible for marketing performance, reporting to leadership, or building your 2026 growth plan, these will be three hours well spent.

I hope you’ll join us.

Save Your Spot

P.S. The team is talking about doing a special AMA for any questions we can’t answer during the event. Another benefit of joining us live … it’s the only way you can submit your question.

https://www.searchenginejournal.com/sej-live-with-newsroom-free-event-march11/568089/