Google Bans Back Button Hijacking, Agentic Search Grows – SEO Pulse via @sejournal, @MattGSouthern

Welcome to this week’s Pulse: the updates affect what Google considers spam, what happens when you report it, and what agentic search looks like in practice.

Here’s what matters for you and your work.

Google’s New Spam Policy Targets Back Button Hijacking

Google added back button hijacking to its spam policies, with enforcement beginning June 15. The behavior is now an explicit violation under the malicious practices category.

Key facts: Back button hijacking occurs when a site interferes with browser navigation and prevents users from returning to the previous page. Pages engaging in the behavior face manual spam actions or automated demotions.

Why This Matters

Google called out that some back button hijacking originates from included libraries or advertising platforms, which means the liability sits with the publisher even when the behavior comes from a vendor.

You have two months to audit every script running on your site, including ad libraries and recommendation widgets you didn’t write yourself.
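
One place to start that audit is watching the History API itself. The snippet below is a minimal diagnostic sketch under my own assumptions (run in the browser console or a staging tag; the logging approach is illustrative, not anything Google has published): it surfaces which scripts, including third-party ones, push extra entries into session history or react to popstate, the mechanism back button hijacking typically relies on.

```typescript
// Diagnostic sketch only: run in the browser console or a staging tag to see
// which scripts manipulate session history. It does not change page behavior.
(() => {
  const originalPushState = history.pushState.bind(history);
  const originalReplaceState = history.replaceState.bind(history);
  let extraEntries = 0;

  history.pushState = (data: unknown, title: string, url?: string | URL | null) => {
    extraEntries += 1;
    // The stack trace shows which script (often a vendor library) made the call.
    console.warn(`pushState call #${extraEntries} ->`, url, new Error().stack);
    originalPushState(data, title, url);
  };

  history.replaceState = (data: unknown, title: string, url?: string | URL | null) => {
    console.warn("replaceState ->", url, new Error().stack);
    originalReplaceState(data, title, url);
  };

  // A page that instantly re-navigates whenever popstate fires is trapping the back button.
  window.addEventListener("popstate", () => {
    console.warn("popstate fired; if the page immediately navigates again, back navigation is being intercepted.");
  });
})();
```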

Sites that receive a manual action after June 15 can submit a reconsideration request through Search Console once the offending code is removed.

What SEO Professionals Are Saying

Daniel Foley Carter, SEO Consultant, summed up the community reaction on LinkedIn:

“So basically, that spammy thing you do to try and stop users leaving? Yeah, don’t do it.”

Manish Chauhan, SEO Head at Groww, added on LinkedIn that he was:

“glad this is being addressed. It always felt like a short-term hack for pageviews at the cost of user trust.”

Read our full coverage: New Google Spam Policy Targets Back Button Hijacking

Spam Reports May Now Trigger Manual Actions

Google updated its report-a-spam documentation on April 14 to say user submissions may now trigger manual actions against sites found violating spam policies. The previous guidance said spam reports were used to improve spam detection systems rather than to take direct action.

Key facts: Google may use spam reports to take manual action against violations. If Google issues a manual action, the report text is sent verbatim to the reported website through Search Console.

Why This Matters

Google now states that spam reports can be used to initiate manual actions, making reports explicitly part of its enforcement process in official documentation.

This also raises concerns about potential abuse, as grudge reports and competitor sabotage may become more appealing when reports have a tangible impact. The real test will be the quality of the reports Google actually considers.

What SEO Professionals Are Saying

Gagan Ghotra, SEO Consultant, wrote on LinkedIn about why the change may lead to better reports:

“Now spam reports have direct relation to Google issuing manual actions against domains. Google announced if there is a spam report from a user and based upon that report Google decide to issue manual action against a domain then Google will just send the user submitted content in report to the site owner (Search Console – Manual Action report) and will ask them to fix those things. Seems like Google was getting too many generic spam reports and now as the incentive to report are aligned. That’s why I guess people are going to submit reports which have a lot of relevant information detailing why/how a specific site is violating Google’s spam policies.”

Read Roger Montti’s full coverage: Google Just Made It Easy For SEOs To Kick Out Spammy Sites

Agentic Restaurant Booking Expands In AI Mode

Google expanded agentic restaurant booking in AI Mode to additional markets on April 10, including the UK and India. Robby Stein, VP of Product for Google Search, announced the rollout on X.

Key facts: Searchers can describe group size, time, and preferences to AI Mode, which scans booking platforms simultaneously for real-time availability. The booking itself is completed through Google partners rather than directly on restaurant websites.

Why This Matters

Restaurant booking shows how task completion within search works. For local SEOs and marketers, traffic patterns shift: users now often stay within Google during discovery, with bookings routed through partners.

This depends on Google booking partners, which may limit visibility for restaurants outside those platforms, making presence on Google-supported booking sites more important than the restaurant’s own website. This model may or may not extend to other experiences.

What SEO Professionals Are Saying

Glenn Gabe, SEO and AI Search Consultant at G-Squared Interactive, flagged the rollout on X:

“I feel like this is flying under the radar -> Google rolls out worldwide agentic restaurant booking via AI Mode. TBH, not sure how many people would use this in AI Mode versus directly in Google Maps or Search (where you can already make a reservation), but it does show how Google is moving quickly to scale agentic actions.”

Aleyda Solís, SEO Consultant and Founder at Orainti, noted a key limitation in a LinkedIn post:

“Google expands agentic restaurant booking in AI Mode globally: You still need to complete the booking via Google partners though.”

Read Roger Montti’s full coverage: Google’s Task-Based Agentic Search Is Disrupting SEO Today, Not Tomorrow

Theme Of The Week: Google Gets Specific

What counts as spam, what happens when spam gets reported, and what agentic search looks like all got clearer definitions this week.

Back button hijacking becomes a named violation with an enforcement date. Google’s documentation now says spam reports may be used for manual actions, not just fed into detection systems. Agentic search becomes a live product for restaurant reservations in specific markets rather than a talking point about the future.

The compliance work, the reporting mechanics, and the agentic experience are now defined clearly enough to be tracked directly rather than merely forecast.

Top Stories Of The Week:

More Resources:


Featured Image: Roman Samborskyi/Shutterstock

https://www.searchenginejournal.com/seo-pulse-google-targets-back-button-hijacking-agentic-search-grows/572282/








Your AI Visibility Strategy Doesn’t Work Outside English via @sejournal, @DuaneForrester

This series has been written in English, tested in English, and grounded in research conducted primarily in English. Every framework discussed here (vector index hygiene, cutoff-aware content calendaring, community signals, machine-readable content APIs) was conceived by an English-speaking practitioner, stress-tested against English-language queries, and validated against benchmarks that, as this article will show, are themselves English-weighted by design. That is not a disclaimer; it is the central problem this article is about.

The AI visibility discourse at large carries the same limitation. One 2024 study analyzing AI evaluation datasets found that over 75% of major LLM benchmarks are designed for English tasks first, with non-English testing treated as an afterthought. The strategies built on top of those benchmarks inherit the same bias.

Enterprise brands are not the villains in this story. Translation-first search content strategies produced imperfect results globally, but markets had learned to live with the nuanced failures. Traditional search indexed what existed, ranked it imperfectly, and the degradation was quiet enough that no one filed a complaint. LLMs raise the bar in a way search never did, and the reason is structural, which is what the rest of this article examines.

The Platform Map

Before optimizing AI visibility in any market, a brand needs to answer a question the English-centric visibility discourse rarely asks: Which AI system are your target customers actually using? The answer varies more dramatically by region than most global marketing teams have accounted for.

In China, a market of 1.4 billion people, ChatGPT and Gemini are not accessible. The AI visibility contest happens entirely within a separate ecosystem. Baidu’s ERNIE Bot crossed 200 million monthly active users in January 2026, and Baidu holds the leading position in AI search market share, according to Quest Mobile. But Baidu is no longer operating in a vacuum. ByteDance’s Doubao surpassed 100 million daily active users by end of 2025, and Alibaba’s Qwen exceeded 100 million monthly active users in the same period. A brand’s English-optimized content architecture is not underperforming in this ecosystem. It simply does not exist there.

South Korea tells a different version of the same story. Naver captured 62.86% of the South Korean search market in 2025 (more than double Google’s share) and since March 2025 has been deploying AI Briefing, a generative search module powered by its proprietary HyperCLOVA X model, with plans for up to 20% of all Korean searches to surface AI-generated answers by end of 2025. Naver is also a closed ecosystem where results route to internal Naver properties, not necessarily the open web. Western brands whose structured data and llms.txt implementation was designed for open-web crawlers are operating with architecture that was never built to reach Naver’s retrieval layer. China and Korea alone account for well over a billion AI-active users on platforms a standard global visibility strategy does not touch.

The Map Is Far Bigger Than We’re Drawing

Those two markets are the ones that get cited because their scale is impossible to ignore. But the platforms being built outside the English-dominant orbit extend considerably further, and the breadth of what has launched in the last two years deserves attention on its own terms.

Europe

  • France – Mistral AI’s Le Chat was the No. 1 free app in France after its February 2025 launch; the French military awarded Mistral a deployment contract through 2030, and France committed €109 billion in AI infrastructure investment at the 2025 AI Action Summit.
  • Germany – Aleph Alpha trains in five languages with EU regulatory compliance by design, backed by Bosch and SAP.
  • Italy – Velvet AI (Almawave/Sapienza Università di Roma) is built specifically for Italian language and cultural context, designed for EU AI Act compliance from inception.
  • European Union – The OpenEuroLLM initiative, launched in 2025, is developing a family of open LLMs covering all 24 official EU languages.
  • Switzerland – Apertus (EPFL/ETH Zurich/Swiss National Supercomputing Centre, September 2025) supports over 1,000 languages with 40% non-English training data, including Swiss German and Romansh.

Middle East

  • UAE/Abu Dhabi – Falcon (Technology Innovation Institute) ranges from 7B to 180B parameters; Falcon Arabic, launched May 2025, outperforms models up to 10 times its size on Arabic benchmarks.
  • Saudi Arabia – HUMAIN, backed by the sovereign wealth fund, is framed as a full-stack national AI ecosystem.

South and Southeast Asia

  • India – Bhashini (Ministry of Electronics and IT) has produced over 350 AI-powered language models; BharatGen, launched June 2025, is India’s first government-funded multimodal LLM.
  • Singapore / Southeast Asia – SEA-LION (AI Singapore) supports 11 Southeast Asian languages; Malaysia, Thailand, and Vietnam have deployed MaLLaM, OpenThaiGPT, and GreenMind-Medium-14B-R1, respectively.

Latin America

  • 12-country consortium – Latam-GPT launched September 2025, led by Chile’s CENIA with over 30 regional institutions, trained on court decisions, library records, and school textbooks, with an initial Indigenous language tool for Rapa Nui.

Africa/Eastern Europe

  • Sub-Saharan Africa – Lelapa AI’s InkubaLM supports Swahili, Yoruba, IsiXhosa, Hausa, and IsiZulu; Nigeria launched a national multilingual LLM in 2024.
  • Russia/Ukraine – GigaChat (Sberbank) is the dominant domestically deployed Russian AI assistant; Ukraine announced a national LLM in December 2025, built with Kyivstar and trained on Ukrainian historical and library data.

This list is not really meant to be exhaustive, but it is meant to be disorienting.

Every entry above represents a retrieval ecosystem, a cultural signal hierarchy, and a community proof-point structure that a North American-optimized AI visibility strategy does not reach. But the more important observation is about which direction these models were built in.

The old content strategy model was centrifugal: the brand sits at the center, creates content, translates it, and pushes it outward into markets. Traditional search accommodated this because crawlers are indifferent to cultural authenticity: they index what is there. The imperfect results were tolerated because most markets had no better alternative.

These regional models were built in the opposite direction. A government mandate, a national corpus, a specific cultural identity, a language’s syntactic logic: that is the origin point. The model was trained on what that place knows about itself. A brand’s translated content arrives as a foreign object with no parametric presence, carrying the syntactic and cultural signatures of its origin language. Translation does not retrofit cultural fit into a model that was built without you in it.

And this does not stop at the English/non-English boundary. Even within English, regional identity shapes what a model treats as native. Irish English carries vocabulary – craic, gas, giving out – that exists nowhere else. Australian idiom, Singaporean English, and Nigerian Pidgin all have distinct fingerprints. A U.S. brand’s content may read as subtly foreign to a model trained predominantly on British or Irish corpora. The direction of the problem is the same regardless of whether the language is technically shared. These often aren’t just words; they’re compressed cultural signals. A literal translation gives you the category but often strips out intensity, intent, emotional tone, social expectation, or shared history.

The Embedding Quality Gap

The reason translation does not solve this is not just strategic. It’s structural, and it lives in the embedding layer.

Retrieval in AI systems depends on semantic similarity calculations. Content is encoded as a vector, queries are encoded as vectors, and the system identifies matches by measuring distance in that vector space. The accuracy of those matches depends entirely on how well the embedding model represents the language in question. Embedding models are not language-neutral. (I think of this as a kind of cultural parametric distance, or a language vector bias issue.)
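
As a minimal sketch of that distance calculation (the vectors below are invented toy values; real embeddings have hundreds or thousands of dimensions):

```typescript
// Cosine similarity: the basic match score behind vector retrieval.
// Toy three-dimensional vectors stand in for real embeddings; values are invented.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// If the embedding model represents a language poorly, the query vector and the
// content vector drift apart and the score drops, even when the content is relevant.
const queryVector = [0.82, 0.10, 0.55];   // hypothetical query embedding
const contentVector = [0.78, 0.15, 0.60]; // hypothetical document embedding
console.log(cosineSimilarity(queryVector, contentVector).toFixed(3)); // ≈ 0.997 (high similarity)
```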

The most rigorous current evidence comes from the Massive Multilingual Text Embedding Benchmark (MMTEB), published at ICLR 2025. Even across more than 250 languages and 500 evaluation tasks, the benchmark’s own task distribution is skewed toward high-resource languages. The benchmarks practitioners use to evaluate whether their embedding architecture works in other languages are themselves English-weighted. A leaderboard score that looks reassuring may be measuring performance on a test that does not represent the language actually in use.

The structural cause is well documented: the Llama 3.1 model series, positioned at release as state-of-the-art in multilingual performance, was trained on 15 trillion tokens, of which only 8% was declared non-English. This is not a Llama-specific problem; it reflects the composition of the large-scale web corpora used to train most foundation models, where English content is overrepresented at every stage: crawl filtering, quality scoring, and final dataset construction. Research comparing English and Italian information retrieval performance, published May 2025, found that while multilingual embedding models bridge the general-domain gap between the two languages reasonably well, performance consistency decreases substantially in specialized domains, which are precisely the domains enterprise brands operate in.

The embedding gap does not produce obvious errors. It produces quietly degraded retrieval: content that should surface does not, and there is no visible failure signal. The dashboards stay green. The gap only becomes visible when someone tests in the actual market language.

When Translation Isn’t Enough

Below the embedding layer sits a problem that is harder to instrument: Cultural context shapes what a model treats as relevant in the first place. Research published in 2024 by Cornell University researchers found that when five GPT models were asked questions from a widely used global cultural values survey, responses consistently aligned with the values of English-speaking and Protestant European countries. The models were not asked to translate anything; they were asked to reason, and their default frame of reference was shaped by the cultural composition of their training data.

Consider a brand headquartered outside France, but operating in France. Their content, even if professionally translated, was likely written by non-French-speaking teams with non-French-market authority signals: the institutional citations, the comparison frameworks, the professional register. Mistral was built on French corpora, with French institutional relationships and French media partnerships as its baseline for what counts as authoritative. A Canadian brand’s French content, for example, is tolerated by a French-speaking human reader. Whether it clears the threshold for a model trained on native French content as its definition of relevance is a different question entirely.

The community signals argument from the previous article in this series applies here with a regional dimension. The platforms that drive AI retrieval through community consensus differ by market. In China, Xiaohongshu now processes approximately 600 million daily searches (nearly half of Baidu’s query volume) with over 80% of users searching before purchasing and 90% saying social results directly influence their decisions. The community signals that matter for AI visibility in China are not the ones a strategy built around English-language review platforms is generating.

A brand may have excellent English-language retrieval infrastructure, strong community signals in Western markets, and a well-architected machine-readable content layer, and still be effectively invisible in Korea, structurally disadvantaged in Japan, and culturally misaligned in Brazil. This is not a failure of execution as much as a failure of assumption about which direction the optimization flows.

What Enterprise Teams Should Do

An honest note before the framework: The documented, auditable evidence base for enterprise-level non-English AI visibility strategies does not yet exist in a form that holds up to scrutiny. Work is being done, but a citable case study requires a defined baseline, a measurable intervention, a controlled timeframe, and independently validated results. A practitioner’s assertion that their work applies to your situation is not that. The absence of rigorous case data is a reason to build with intellectual honesty about what is validated versus directional, not a reason to wait. With that in mind, here’s what you can do today:

Audit AI visibility per language and per market, not globally. Query performance in English tells you nothing about performance in Japanese, and performance with global AI platforms tells you nothing about performance inside Naver’s AI Briefing. The audit needs to happen at the market level, using queries constructed in the local language by native speakers, not translated from English.
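
One way to make that concrete is to structure the audit as an explicit market-by-market matrix rather than a single global scorecard. The sketch below is hypothetical; the markets and platforms are illustrative assumptions, not a recommendation.

```typescript
// Hypothetical audit matrix: each market has its own platforms and its own
// native-language queries, authored in-market rather than translated from English.
interface MarketAudit {
  market: string;
  platforms: string[];      // the AI systems that actually serve this market
  nativeQueries: string[];  // written by native speakers, not translated
}

const audits: MarketAudit[] = [
  { market: "South Korea", platforms: ["Naver AI Briefing", "Google AI Mode"], nativeQueries: [] },
  { market: "France", platforms: ["Le Chat", "Google AI Mode"], nativeQueries: [] },
  { market: "Japan", platforms: ["Google AI Mode", "ChatGPT"], nativeQueries: [] },
];

// A global English score cannot stand in for any single row in this matrix.
for (const audit of audits) {
  console.log(`${audit.market}: ${audit.platforms.join(", ")} / ${audit.nativeQueries.length} native-language test queries authored`);
}
```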

Map the AI platforms that matter in each target market before optimizing. The list in the previous section is a starting point, not a permanent reference, as this landscape shifts quarterly. Optimization work (structured data, content APIs, entity signals) needs to be built toward the platforms that actually serve each market.

Build localized content, not translated content. The four-layer machine-readable architecture discussed in this series applies in every language. But a translated version of an English content API is not a localized one. Entity relationships, cultural authority signals, and community proof points all need to be rebuilt for local context. The optimization direction is inward from the market, not outward from the brand.

Accept that English-English is not a single market either. The same structural logic applies within English. A US brand’s content may carry American syntactic and cultural signatures that read as subtly foreign to models trained on predominantly British, Irish, or Australian corpora. Regional English is not a rounding error. It is evidence of the same underlying principle operating on a smaller scale.

Accept that a single global AI visibility strategy is insufficient. The frameworks developed in English, including the ones in this series, are a starting point for one slice of the global market. Extending them globally requires treating each major market as a distinct optimization problem: different platforms, different embedding architectures, different cultural retrieval logic, and a different direction of trust.

Image Credit: Duane Forrester

There is real work to be done. If we step back and look at the big picture again, it’s clear that markets that were once willing to live with the nuanced failures of translation-first content strategies are increasingly operating on platforms built to serve them natively, and that gap is widening. You know I like to name things when the industry hasn’t gotten there yet, so here it is: this is the Language Vector Bias problem. And the brands that start closing it now are not catching up to a solved problem. They are getting ahead of the most consequential visibility gap we aren’t really talking about.

More Resources:


This post was originally published on Duane Forrester Decodes.


Featured Image: Billion Photos/Shutterstock; Paulo Bobita/Search Engine Journal

https://www.searchenginejournal.com/your-ai-visibility-strategy-doesnt-work-outside-english/571742/




Machine-First Architecture: AI Agents Are Here And Your Website Isn’t Ready, Says NoHacks Podcast Host via @sejournal, @theshelleywalsh

AI agents are already here. Not as a concept, not as a demo, but shipping inside browsers used by billions of people. Every major tech company has launched either a browser with AI built in or an extension that acts on your behalf.

Anthropic’s Claude for Chrome can navigate websites, fill forms, and perform multi-step operations on your behalf. Google announced Gemini in Chrome with agentic browsing capabilities, including auto browse, which can act on webpages for you. OpenClaw, the open-source AI agent, connects large language models directly to browsers, messaging apps, and system tools to execute tasks autonomously.

For more understanding about optimizing for agents, I spoke to Slobodan Manic, who recently wrote a five-part series on optimizing websites for AI agents. His perspective sits at the intersection of technical web performance and where AI agent interaction is actually heading.

From Slobodan Manic’s testing, almost every website is structurally broken for this shift.

“It started with us going to AI and asking questions. And now AI is coming to us and meeting us where we are. From my testing, I noticed that websites are nowhere near being ready for this shift because structurally almost every website is broken.”

The Single Biggest Thing That’s Changed

I started by asking Slobodan what’s changed in the last six to nine months that means SEOs need to pay attention to AI agents right now.

“Every major tech company has launched either a browser that has AI in it that can do things for you or some kind of extension that gets into Chrome. Claude has a plugin for Chrome that can do things for you, not just analyze web pages, summarize web pages, but actually perform operations.”

When ChatGPT first launched in late 2022 and made AI widely accessible, we asked it questions, much as we started out typing basic queries into search engines 25 years ago. We are now becoming more sophisticated and fluid with our prompting as we realize that AI can do so much more than [write me an email to politely decline an invitation].

Agents represent an even bigger shift to a different dynamic, where AI can complete tasks on our behalf and run complex systems. [Check my emails and delete any that are spam, sort them into a priority group, and surface what needs my immediate attention and provide a qualified response to anything on a basic query, plus make appointments in my calendar for any meeting invites].

Understanding and taking advantage of the possibilities is something we are all trying to figure out right now. What we should be aware of is that most websites aren’t built or ready for this agentic world.

Websites Are Becoming Optional, Or Are They?

I have a theory that brand websites are becoming hubs, the central point that connects all of your content assets online. But Slobodan has gone further. He’s written about websites becoming optional for the end user, with pages built by machines for machines and the interaction happening through closed system interfaces. I asked him to expand on that vision and what kind of timeframe we’re realistically looking at.

“First I’ll say that this is not fully happening today. This is still near to mid future. This is not March 2026,” he clarified. But the signals are concrete.

“Google had a patent granted in January that will let them use AI to rewrite the landing page for you if your landing page is not good enough. And then we have all these other companies including Google that announced Gemini browsing for you inside Chrome. So we have an end-to-end AI system that does everything while humans just wait for results.”

He was careful not to overstate it. People still like to browse, read, and compare things. Websites aren’t disappearing.

“Just the same way as mobile traffic has not killed desktop traffic even if it’s taken a bigger share of traffic overall, higher percentage of overall traffic while the desktop traffic is staying flat in terms of absolute numbers, I think this is another lane that will open where things will be happening without a human being involved in every step.”

His timeline for this: “Within a year we can have this become a reality. Not majority, but if Google starts rewriting landing pages using AI, we will see this happening probably 2027, if not sooner.”

When Checkout Becomes A Protocol

Slobodan has written that checkout is becoming a protocol, not a page. If an AI agent can buy on your behalf without ever loading a brand’s website, I asked, “What does that mean for how brands build trust and differentiate when the customer never sees their site?”

“If you’re building trust in a checkout page, you’re doing it wrong. Let’s start there. That I firmly believe. This is not to do with AI. This was never the right place to build trust,” he responded.

Slobodan pointed to every Shopify checkout page that looks identical. “There’s no trust built there. It’s just a machine-readable page that looks the same for everyone, for every brand. You’re supposed to be doing your job before the user needs to pay you.”

This is where he referenced Jono Alderson, and the concept of upstream engineering. “Moving upstream and doing work there and not on the website is the only way to move forward for anyone whose job is optimizing websites. That’s SEO, that’s CRO, that’s content, that’s anyone doing any kind of website work.”

He summarized it best by saying, “Your website is a part of the equation. Your website is not the equation. And that’s the biggest structural shift that people need to make to survive moving forward.”

What SEOs And Brands Should Actually Do Now

I asked what SEOs and brands can practically start doing to transition over the next year. His answer reframed how we should think about the website itself.

“If your website was your storefront, and it was for decades, people come to you, people do business there. It needs to be a warehouse and a storefront moving forward or you’re not going to survive. Simple as that.”

“We had all those bookstores that were selling books in the ’90s and then Amazon shows up and then you need to be a warehouse. You need to exist in two planes at the same time for the near future at least. So focusing only on your website is the most wrong thing you can do moving forward.”

His main area of focus right now is what he calls machine-first architecture. The principle is to build for machines before you build for humans.

“You don’t build your website for humans until you’ve built it for machines. When you’re working on a product page, there’s no Figma, there’s no design, there’s no copy. You start with your schema. What is your schema supposed to say? What is the meaning of the page? You start with the meaning and then from that build into a web page as it’s built for humans.”
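
To make that schema-first sequence concrete, here is a minimal, hypothetical sketch: the page’s meaning is expressed as schema.org JSON-LD before any design or copy exists. The product details are invented for illustration, not taken from the interview.

```typescript
// Schema-first sketch: define the page's meaning as structured data first,
// then build the human-facing page from it. Product details are invented.
const productPageMeaning = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Example Trail Running Shoe",          // hypothetical product
  description: "Lightweight trail shoe with a 6 mm drop.",
  brand: { "@type": "Brand", name: "ExampleBrand" },
  offers: {
    "@type": "Offer",
    price: "129.00",
    priceCurrency: "USD",
    availability: "https://schema.org/InStock",
  },
};

// The same object later ships inside the rendered, human-facing page.
const jsonLdTag = `<script type="application/ld+json">${JSON.stringify(productPageMeaning)}</script>`;
console.log(jsonLdTag);
```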

He compared it directly to the mobile-first shift. “That did not mean no desktop. That meant do the more difficult version of it first and then do the easy thing. Trust me, it’s a lot more complicated to add meaning and structure to a page that’s already been designed than to do it the other way.”

And it extends beyond the website. “If you’re saying something on your website, you better check all of your profiles everywhere online, what people are saying about you. It’s everything everywhere all at once. But this is what optimization has become and what it needs to be.”

I also put to him the argument that optimizing for LLMs is fundamentally different from SEO. His response was unequivocal.

“Hard disagree. The hardest possible disagree. If you were doing things the right way, working on the foundations and checking every box that has to be checked, it’s not different at all.”

Where he sees a difference is in the speed of consequences. “With AI in the mix, you just get exposed much faster and the consequences are much greater. There’s nothing different other than those two things.”

This echoed something I’ve felt strongly. The cycle is moving more quickly, but there’s so much similarity with what happened at the foundation of this industry 25 to 30 years ago, which I raised in my SEO Pioneers series. We’re feeling our way through in the same way. And Slobodan agreed.

“They figured this out once and maybe we should ask them how to figure it out again.”

Vibe Coding Is A Trap, Deep Work Is The Moat

For my last question, I put it to Slobodan that he’s said vibe coding is a trap and deep work is the only moat left. For the SEO practitioner feeling overwhelmed, what’s the one thing they should actually do this week?

“It’s really the foundations. I hate to give the boring answer, but it’s really fixing every single foundational thing that you have on your website or your website presence.”

He’s watched the industry chase one shiny tool after another. “There’s always a new shiny toy to work on while your website doesn’t work with JavaScript disabled. Just ignore all of that until you’ve fixed every single broken foundation you have on your website.”
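
One small, hedged example of that kind of foundational check: fetch the raw HTML and confirm the content you care about exists before any JavaScript runs. The URL and phrase below are placeholders; real checks should cover your key templates.

```typescript
// Spot-check: does the critical content exist in the raw HTML, before JavaScript runs?
// Runs in Node 18+ (global fetch). URL and phrase are placeholders.
async function contentExistsWithoutJs(url: string, mustContain: string): Promise<boolean> {
  const response = await fetch(url);
  const html = await response.text();
  return html.includes(mustContain);
}

contentExistsWithoutJs("https://www.example.com/product/widget", "Widget 3000")
  .then((ok) => console.log(ok ? "Content present in raw HTML" : "Content only appears after JS renders"));
```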

On vibe coding specifically, he was precise: “I don’t like the term vibe coding. It just suggests that you have no idea what you’re doing and you’re happy about it. That’s the way that sounds to me. The concept of AI-assisted coding, it’s there. It’s great. It’s not going away.”

“But just focus on what you should be doing first before you use AI to do it faster.”

What resonated with me is how well this applies to writing, too. AI is brilliant at confidently producing a draft that, at first glance, looks great. But when you actually read it, you realize it’s just somebody confidently talking nonsense.

Slobodan nailed the core problem: “You need to know what good is and what good looks like. Because AI will always give you something. If you don’t know enough about that specific thing, it will always look good from the outside. And there’s a reason why everyone is okay with vibing everything except for their own profession, because they try it and they see that the results are just horrific.”

Build For Machines First, Everything Else Follows

The one thing to take away from this conversation is to build for machines first, then humans. Not because human user experience won’t matter, but because getting the machine layer right first makes the human layer better.

Your website is no longer the only version of your business that people, or agents, will encounter. The brands that treat it as part of a wider ecosystem rather than the whole ecosystem are the ones that will come through this transition in the strongest position.

Watch the full video interview with Slobodan Manic here, or on YouTube.


Thank you to Slobodan for sharing his insights and being my guest on IMHO.

More Resources:


This post was originally published on Shelley Edits.


Featured Image: Shelley Walsh/Search Engine Journal

https://www.searchenginejournal.com/machine-first-architecture-ai-agents-are-here-and-your-website-isnt-ready-says-nohacks-podcast-host/571898/




Google’s Patent On Autonomous Search Results via @sejournal, @martinibuster

The United States Patent Office recently published Google’s continuation on a patent for a search system that detects when there is no satisfactory answer for a query and waits to automatically deliver the answer when it becomes available.

Search And AI Assistant

The patent, published in February 2026, is a continuation of an older patent, with the main change being to apply the invention within the context of an AI assistant. The invention addresses the problem of answering a question when no actual answer is available at the time a user makes the query. What it does is wait until there’s a satisfactory answer, at which point it circles back to the user with the answer, without them having to ask again.

The patent is titled “Autonomously providing search results post-facto, including in assistant context.” Although the patent mentions quality thresholds, those thresholds are defined in terms of whether the answer meets the user’s needs.

The patent describes six scenarios that would trigger the invention:

  1. When no search results meet defined quality or authoritative-answer criteria.
  2. When results exist but fail to provide a definitive or authoritative answer that satisfies those criteria.
  3. When no results meet quality criteria because the information is not yet available.
  4. When a query seeks a specific answer and no result satisfies the required criteria.
  5. When a resource later satisfies the defined criteria after previously lacking required information.
  6. When a previously available resource is refined or updated so that it now meets the criteria.

Useful And Complete Answers

Google’s patent says that the invention is a solution for times when there are no useful or complete answers because the information does not yet exist or is not good enough, forcing users to keep searching repeatedly.

The system checks if results meet:

  • A quality standard,
  • An authoritativeness standard, or
  • A completeness standard.

If the current answers don’t meet those standards, the system will store the query and monitor for new or updated information. Once it becomes available, it will send the results to the user later without them searching again.
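
To clarify the flow the patent describes, here is a minimal hypothetical sketch of that store-and-recheck behavior. It illustrates the described logic only, not Google’s implementation, and the quality threshold is an invented stand-in for the patent’s quality, authoritativeness, and completeness criteria.

```typescript
// Hypothetical sketch of the described flow, not Google's implementation:
// store queries with no satisfactory answer, re-check later, notify when one appears.
interface PendingQuery {
  userId: string;
  query: string;
  askedAt: Date;
}

interface Result {
  url: string;
  qualityScore: number; // stand-in for quality/authoritativeness/completeness checks
}

const QUALITY_THRESHOLD = 0.8; // invented threshold for illustration
const pending: PendingQuery[] = [];

function handleQuery(userId: string, query: string, results: Result[]): void {
  const best = results.find((r) => r.qualityScore >= QUALITY_THRESHOLD);
  if (best) {
    deliver(userId, query, best);
  } else {
    // No satisfactory answer yet: remember the query instead of dropping it.
    pending.push({ userId, query, askedAt: new Date() });
  }
}

function recheckPending(search: (q: string) => Result[]): void {
  for (let i = pending.length - 1; i >= 0; i--) {
    const item = pending[i];
    const best = search(item.query).find((r) => r.qualityScore >= QUALITY_THRESHOLD);
    if (best) {
      deliver(item.userId, item.query, best); // notification, assistant reply, any device
      pending.splice(i, 1);
    }
  }
}

function deliver(userId: string, query: string, result: Result): void {
  console.log(`Notify ${userId}: an answer for "${query}" is now available at ${result.url}`);
}
```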

Follow-Up Questions Are Not Necessary

What is novel about the invention is that it enables follow-up delivery of results after the original query without requiring a new follow-up question. It also surfaces search results proactively in notifications or assistant conversations.

At a later time, when new or updated information becomes available that satisfies the criteria, the system proactively delivers that information to the user. This delivery can occur through notifications, within an unrelated interaction, or during a later conversation with an automated assistant.

The system may also optionally notify the user that no good results are currently available and ask if they want to be informed when better results appear.

What this system does is transform search from a one-time, user-initiated action into a persistent, ongoing process in which the system continues working in the background and updates the user when meaningful information becomes available.

Cross-Device Continuity

An interesting feature of this invention is that it can reach out to the user across multiple devices.

Here is where it’s outlined:

[0012] “In some implementations, the query is received on an additional computing device that is in addition to the computing device for which the content is provided for presentation to the user.”

This capability is highlighted again in section [0067]:

“For example, the content may be provided for presentation to the user via the same computing device the user utilized to submit the query and/or via a separate computing device.”

The output can be visual and/or audible, delivered across devices or through an automated assistant, and the information can be presented while the user is interacting with the assistant in a different context, within what the patent describes as an “ecosystem” of devices.

Lastly, the patent explains that the information can be surfaced when the user is interfacing with the automated assistant in a completely different context:

[0040]”…the content may be provided for presentation to the user via the same computing device the user utilized to submit the query and/or via a separate computing device. The content may be provided for presentation in various forms. For example, the content may be provided as a visual and/or audible push notification on a mobile computing device of the user, and may be surfaced independent of the user again submitting the query and/or another query.

Also, for example, the content may be presented as visual and/or audible output of an automated assistant during a dialog session between the user and the automated assistant, where the dialog session is unrelated to the query and/or another query seeking similar information.”

Takeaways

The patent (Autonomously providing search results post-facto, including in assistant context) is in line with Google’s vision of task-based agentic search, where AI assistants help users accomplish things. This patent could be applied to an AI agent that is asked for tickets to an event when the tickets aren’t yet available. Or it could be applied to making restaurant reservations when the dates open up. Both of those scenarios are related to task-based agentic search (TBAS).

Here are seven takeaways:

  1. The system stores data associated with the user about unresolved queries, allowing it to track unanswered information needs over time rather than treating each search as a one-off event.
  2. It delivers results within future interactions, including unrelated assistant conversations, not just through standalone notifications.
  3. The notifications can happen across an ecosystem of devices.
  4. A lack of results is defined by failing to meet quality criteria, which can mean the information is absent, the answer is not yet available, or the answer is not available from authoritative sources.
  5. The system focuses on queries that seek specific answers, rather than general informational searches.
  6. It supports cross-device continuity, enabling a query on one device to be fulfilled later on another.
  7. The design reduces repeated searches by eliminating the need for users to check back; the system autonomously circles back when the information is available.

Featured Image by Shutterstock/uyabdami

https://www.searchenginejournal.com/googles-patent-on-autonomous-search-results/572216/




The AI Slop Loop via @sejournal, @lilyraynyc

Last year, after spending a few days at a work summit in Austria, I asked Perplexity for the latest news related to SEO and AI search. It responded with details about a supposed “September 2025 ‘Perspective’ Core Algorithm Update” that Google had just rolled out, emphasizing “deeper expertise” and “completion of the user journey.”

It sounded plausible enough … if you don’t live and breathe Google core updates. Unfortunately for Perplexity, I do.

I knew instantly that this information wasn’t right. For one, Google hasn’t named core updates in years. It also already had SERP features called “Perspectives.” And if a core update had actually rolled out while I was away, I would’ve been flooded with messages. So I checked Perplexity’s sources … and, surprise! Both citations came from made-up, AI-generated slop on a couple of SEO agency blogs, confidently fabricating details about an algorithm update that never actually happened.

Like a bad game of telephone, this fake SEO news spread across multiple websites – likely driven by AI systems scanning and regurgitating information regardless of accuracy, all in the race to publish and scale “fresh” content. This is how we end up with this mess:

Image Credit: Lily Ray

This bad information reinforces itself to become the official narrative. To this day, you can ask an LLM of your choice (including ChatGPT, AI Mode, and AI Overviews) about the September 2025 “Perspectives” update, and they will confidently answer with information about how it “fundamentally shifted how search results are ranked”:

Image Credit: Lily Ray

Or that it “shifted what ‘good content’ actually means in practice.”

Image Credit: Lily Ray

The problem is: the September 2025 “Perspectives” update never happened. It never affected rankings. It never shifted anything about good content. Because it doesn’t actually exist.

Ironically, when you go on to probe the language model about this, it seems to know this is the case:

Image Credit: Lily Ray

I tweeted about this incident shortly after it happened, which got the CEO of Perplexity’s attention; he tagged his head of search in the tweet comments.

Screenshot from X, April 2026

This isn’t a one-off incident. It’s a pattern I’ve seen countless times in AI search responses, especially on topics related to SEO and AI search (GEO/AEO). And I have a working theory on how it spreads: one AI-generated article hallucinates a detail, sites running AI content pipelines scrape and regurgitate it, more AI-generated sites scrape the same misinformation, and suddenly a made-up algorithm update has citations. For a RAG-based system like Perplexity or AI Overviews, enough citations are basically all it needs to treat something as fact, regardless of whether it’s actually true.

I used Claude to help visualize the “AI Slop Loop” – the cycle of AI-generated misinformation (Image Credit: Lily Ray)

At this point, I’d consider this common. I recently had a client send me SEO/GEO information that was factually incorrect, pulled straight from AI-generated slop on a random, vibe-coded agency blog. The client had no idea. I believe that if you’re trying to learn about SEO or AI search directly from an LLM, this is, unfortunately, an increasingly likely outcome.

I ran similar testing during Google’s March 2026 core update and found multiple AI-generated articles already claiming to share the “winners and losers” while the update was still rolling out.

The articles start with vague, generic filler about core updates that doesn’t actually say anything:

Image Credit: Lily Ray

Then they list “winners and losers” without citing a single site, leaning on vague, generalized claims that sound plausible and fill the void left by a lack of reliable information:

Image Credit: Lily Ray

Unsurprisingly, their sites are filled with AI-generated images, AI support chatbots, and other clear signals that little – if any – human involvement went into creating this content.

Image Credit: Lily Ray

The Era Of AI Misinformation

If someone on the internet says it, according to AI, it must be true.

That’s the reality for the vast majority of people using AI search today. Only about 50 million of ChatGPT’s 900 million weekly active users are paying subscribers, meaning roughly 94% are on the free tier. Google’s AI Overviews and AI Mode are free by design – and AI Overviews reached over 2 billion monthly active users as of mid-2025.

These are the models most AI users are currently interacting with, and they have no real mechanism for distinguishing between information that’s true and information that’s simply repeated across enough sources. Repetition is treated as consensus. If enough sources say it, it becomes fact, regardless of whether any of those sources involved a human who actually verified the claim.

Putting The Problem To The Test

I recently spoke to journalists from both the BBC and the New York Times about the problem of misinformation in AI-generated responses. In the case of the BBC article, the author Thomas Germaine and I tested publishing fictitious blog posts on our personal sites to see whether AI Overviews would present the made-up information as fact, and how quickly.

Even knowing how bad the problem was, I was alarmed by the results.

On my personal blog, in January 2026, I published an AI-generated article about a fake Google core update, which never actually happened. I included the detail that Google “approved the update between slices of leftover pizza.” Within 24 hours, Google’s AI Overviews was confidently serving this fabricated information back to users:

(Note: I’ve since deleted the article from my site because it was showing up in people’s feeds and being covered on external sites, further contributing to the exact problem I’m pointing out here!)

Image Credit: Lily Ray

First, AI Overviews confirmed that there was indeed a core update in January 2026. As a reminder: There was not. My site was the only source making this claim, and that was apparently enough to trigger the AI Overview.

Next, I asked it about the pizza, and it responded accordingly:

Image Credit: Lily Ray

Better yet, the AI Overview found a way to connect my fabricated pizza detail to a real incident: Google’s struggles with pizza-related queries in 2024. It didn’t just regurgitate the lie – it contextualized it.

ChatGPT, which is believed to use Google’s search results, quickly surfaced the same fabricated information, though it at least flagged that the announcement didn’t match Google’s formal communications:

Image Credit: Lily Ray

I deleted my article after getting messages from people who had seen my fake information circulating via RSS feeds and scrapers. I knew it was easy to influence AI responses. I didn’t know it would be that easy.

I also wondered whether my site had an advantage, given its strong backlink profile and established authority in the SEO space.

So I spoke to the BBC journalist, Thomas Germaine, and he put this to the test on his personal site, which generally received very little organic traffic. He published a fictitious article about the “Best Tech Journalists at Eating Hot Dogs,” calling himself the No. 1 best (in true SEO fashion).

According to Thomas’ article in the BBC, within 24 hours, “Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.”

To be fair: the query Thomas chose was niche enough that very few users would ever actually search for it, which is exactly what Google pointed out in its response to the BBC. When there are “data voids,” Google said, this can lead to lower quality results, and the company is “working to stop AI Overviews showing up in these cases.” My main question is: When? The product has already been live for 2 years!

Why Data Voids Aren’t A Great Excuse

Data voids may contribute to the problem, but in my opinion, they don’t excuse it. These AI responses are being consumed by hundreds of millions of users, and “we’re working on it” isn’t an answer when the systems are already deployed at that scale.

In the New York Times article, “How Accurate Are Google’s A.I. Overviews?,” the actual scale of this problem was put to the test. According to the data found in the study, Google’s AI Overviews were accurate 91% of the time. This sounds decent until you actually do the math: With Google processing over 5 trillion searches a year, this suggests that tens of millions of erroneous answers are generated by AI Overviews every hour.
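
Rough arithmetic, assuming for simplicity that every search surfaced an AI Overview: 9% of 5 trillion searches is roughly 450 billion responses per year, or about 50 million per hour. Even if only a fraction of queries actually trigger an AI Overview, the hourly figure still lands in the tens of millions.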

To make matters worse: Even when AI Overviews were accurate, 56% of correct responses were “ungrounded,” meaning the sources they linked to didn’t fully support the information provided. So more than half the time, even when the answer happens to be right, a user clicking through to verify it would find sources that don’t actually back up what they were just told. That number also got worse with the newer model – it was 37% with Gemini 2 and rose to 56% with Gemini 3.

The NYT article drew hundreds of comments from users sharing their own experiences, and the frustration was palpable. The core complaint wasn’t just that AI Overviews get things wrong – it’s that they never admit uncertainty. AI Overviews deliver every answer with the same confident, authoritative tone, whether the information is right or completely fabricated, which means users have no reliable way to distinguish reliable information from hallucination at a glance.

As many commenters pointed out, this actually makes search slower: Instead of scanning a list of sources and evaluating them yourself, you now have to fact-check the AI’s summary before doing your actual research. The tool, supposedly designed to save time for the user, is now creating double work for the user.

Some of the comments also reinforced my same concerns about AI answers citing made-up, AI-generated content. Multiple users described what amounts to the same misinformation cycle: AI systems training on AI-generated content, citing unvetted Reddit posts and Facebook comments as authoritative sources, and producing a self-reinforcing loop of degrading quality. Several commenters compared it to making a copy of a copy. Even the defenders of AI Overviews admitted they still need to verify everything, which sort of undermines the core premise: that AI-generated answers save users time and effort.

How “Smarter” LLMs Are Attempting To Fix the Problem

It’s worth monitoring how the AI companies are attempting to solve these problems. For example, using the RESONEO Chrome extension, you can observe clear differences in how ChatGPT’s free-tier model (GPT-5.3) responds compared to GPT-5.4, the more capable model available only to paying subscribers.

When asking about the recent March 2026 Core Algorithm Update, I used ChatGPT’s more capable “Thinking” model (5.4). The model goes through six rounds of thinking, much of which is clearly intended to keep low-quality and spammy information from making its way into the answer. It even appends the names of recognized authorities on core updates (Glenn Gabe & Aleyda Solis) and limits the fan-out searches to their sites (site:gsqi.com and site:linkedin.com/in/glenngabe) to pull up higher-quality answers.

Image Credit: Lily Ray

This is a step in the right direction, and the model produces measurably better answers. According to OpenAI’s own launch announcement, GPT-5.4’s individual claims are 33% less likely to be false, and its full responses are 18% less likely to contain errors compared to GPT-5.2. GPT-5.3, the model available to free users, also improved over its predecessor. According to OpenAI’s own data, it produces 26.8% fewer hallucinations than prior models with web search enabled, and 19.7% fewer without it.

But these improvements are tiered. The most capable model is paywalled, and the free-tier model, while better than what came before, is still meaningfully less reliable. Other major AI platforms follow the same pattern: better reasoning and accuracy reserved for paying subscribers, faster and cheaper models for everyone else. The result is that the 94% of ChatGPT users on the free tier, and the billions of users interacting with free AI search products like AI Overviews, are getting answers from models that are more likely to be wrong and less equipped to flag uncertainty.

This is the part that makes me most uncomfortable: Most of these users probably don’t realize the gap exists. AI is being marketed everywhere: Super Bowl ads, billboards, and product launches framing AI as the future of knowledge. People see “ChatGPT” or “AI Overview” and assume they’re interacting with something that knows what it’s talking about. They’re probably not thinking about which model tier they’re on, or whether a paid version would give them a materially different answer to the same question.

I understand the economics. These companies need to scale, and offering free tiers drives adoption. But in my opinion, it is irresponsible to deploy these products to billions of people, frame them as “intelligence,” and then quietly reserve the more accurate versions for the fraction of users willing to pay. Especially when the free versions (including the one at the top of Google search) are this susceptible to the kind of misinformation documented throughout this article.

The Burden Of Proof Has Shifted

The September 2025 “Perspectives” Google update still doesn’t exist. But if you ask an LLM about it today, it will describe it with complete confidence. That hasn’t changed in the months since I first flagged it, and it probably won’t change anytime soon, because the content that fabricated it is still indexed, still cited, and still being used to generate new content that references it as fact. The AI slop misinformation cycle continues.

This is what makes the problem so difficult to fix. It’s not a single hallucination that can be patched. It’s a feedback loop that compounds over time, and every day that these systems are live at scale, the loop gets harder to break. The AI-generated slop that seeded the original misinformation is now part of the training data and used as a retrieval source for the next batch of AI-generated answers.

I don’t think the answer is to stop using AI. But I do think it’s worth being honest about what these products actually are right now: prediction engines that treat the volume of information as a proxy for its accuracy. Until that changes, the burden of fact-checking falls on the user. And most users don’t know they’re carrying it, let alone have the time or inclination to do it.

I would warn marketers and publishers against taking SEO or GEO advice from large language models at face value: the information is contaminated and should always be verified by real experts with experience in the field.

This post was originally published on Lily Ray NYC Substack.


Featured Image: elenabsl/Shutterstock

https://www.searchenginejournal.com/the-ai-slop-loop/572090/




The Modern SEO Center Of Excellence: Governance, Not Guidelines via @sejournal, @billhunt

Most enterprise SEO Centers of Excellence (CoEs) fail for a surprisingly simple reason. They were built to advise, not to govern.

On paper, the idea of an SEO CoE is appealing. Centralized expertise. Shared standards. Training and enablement. Documentation that can be reused across markets. In theory, it should bring order to complexity.

In practice, it rarely does.

Most SEO CoEs operate without any real authority over the systems that determine search performance. They publish recommendations that teams are free to ignore. A CoE without governance power becomes a spectator to the very failures it was meant to prevent. This weakness stayed hidden for years because traditional search was forgiving.

Inconsistencies could be corrected downstream. Signals recalibrated. Rankings recovered. But modern search, especially AI-driven discovery, is far less tolerant. Visibility is now shaped by structure, consistency, and machine clarity across the entire digital ecosystem.

Those outcomes cannot be achieved by advisory groups alone. They require operational governance embedded into how digital assets are designed, built, and deployed.

The future of SEO Centers of Excellence isn’t about sharing knowledge more efficiently. It’s about controlling the standards that shape digital assets before they exist.

What We Mean By A Modern SEO Center Of Excellence

A Center of Excellence, in its simplest form, is meant to centralize expertise and standardize how work is done across a complex organization. In theory, it exists to reduce duplication, improve quality, and create consistency at scale.

A modern SEO CoE functions as a governance body. Its responsibility is to define, enforce, and audit the standards that determine how digital assets are designed, built, and deployed across the enterprise.

This distinction matters more than most organizations realize. A CoE is not effective because teams agree with it or appreciate its expertise. It is effective because compliance with its standards is required.

When organizations confuse documentation with governance, they end up with extensive guidelines and minimal change. Standards exist, but adherence is optional. Exceptions multiply quietly. Leadership assumes SEO is being handled because materials have been produced.

Governance is what closes that gap. It transforms SEO from advice into infrastructure.

The Legacy CoE Problem

Traditional SEO Centers of Excellence were designed for a very different operating reality. SEO was treated as a marketing discipline, and visibility was shaped largely by page-level tactics that could be reviewed and corrected after launch. In that environment, guidance, training, and periodic audits were often sufficient to produce incremental gains.

As a result, most legacy CoEs were built around education rather than enforcement. They created playbooks, audited markets, trained local teams, and advised on fixes. What they did not have was authority over the systems that actually determined outcomes – development standards, templates, structured data policies, or product requirements. SEO success depended on persuasion rather than process.

Over time, the CoE became a library of best practices instead of an operating body. The problem was never a lack of knowledge. It was a lack of authority.

That distinction has been understood for decades. Nearly 20 years ago, Search Engine Marketing, Inc., the book I co-authored with Mike Moran, laid out the operating requirements for enterprise-scale search programs, including centralized standards, cross-functional integration, executive sponsorship, and accountability beyond marketing. The model assumed – correctly – that search performance at scale required structural ownership, not optional recommendations.

Where enterprises struggled was not in understanding that model, but in implementing it inside organizations unwilling to centralize control over digital standards. Many adopted the language of a Center of Excellence without adopting the authority required to make it effective.

Why Governance Is Now Mandatory

Search no longer evaluates isolated pages. It evaluates whether an organization presents itself as a coherent system.

As search engines and AI-driven discovery layers have evolved, they’ve shifted from asking “Which page is most relevant?” to “Which sources can be consistently understood and trusted?” That determination isn’t made at the page level. It emerges from how information is structured, reused, governed, and reinforced across an enterprise.

This is where most organizations begin to struggle. In the absence of centralized governance, decisions that affect search performance are made independently across markets, platforms, and teams. Templates evolve to meet local needs. Content adapts to brand or legal constraints. Structured data is implemented differently depending on tooling or vendor preference. None of these choices are irrational on their own. But taken together, they fragment the system’s signal.

Modern search systems respond poorly to fragmentation. When entity definitions vary, taxonomy drifts, or structural rules aren’t consistently enforced, machines can no longer form a stable representation of the brand. The result isn’t a gradual decline that can be corrected with optimization. It’s exclusion. AI-driven systems simply route around sources they cannot reliably interpret and default to alternatives that appear more coherent.

This is the inflection point that makes governance mandatory rather than optional. Best practices and guidelines assume voluntary compliance. They work only when teams are aligned, incentives are shared, and deviations are rare. Enterprise environments rarely meet those conditions. Without enforcement, standards erode quietly, exceptions multiply, and inconsistencies become embedded before anyone notices the impact externally.

Governance is what closes that gap. It ensures that the structural decisions shaping discoverability are made intentionally, enforced consistently, and reviewed before they harden into production. In modern SEO, that level of control is no longer a nice-to-have. It’s the prerequisite for visibility.

What A Real SEO CoE Must Control

A modern SEO Center of Excellence cannot remain advisory. To function as governance, it must have authority across a small number of clearly defined domains where search performance is created or destroyed at scale.

These are not tactical responsibilities. They are control points across five critical areas.

1. Platform & Template Standards

At scale, templates, not individual pages, determine crawlability, eligibility, and consistency. When SEO has no authority over templates, every market, product line, or release becomes a new risk surface, and structural mistakes are replicated faster than they can be corrected.

Governance here does not replace engineering judgment. It defines the non-negotiable requirements that engineering solutions must satisfy before they reach production. In practice, this means the CoE governs standards for:

  • Page templates and rendering rules.
  • Technical accessibility requirements.
  • Metadata and URL frameworks.
  • Structured data deployment patterns.

2. Entity & Structured Data Governance

In AI-driven search, entity clarity determines whether a brand is understood or ignored. Fragmented schema does not merely weaken signals; it fractures identity.

A governing CoE must own how the organization defines itself to machines, ensuring consistency across properties, platforms, and markets. This is not about marking up more fields. It is about protecting signal integrity.

That responsibility includes control over:

  • Entity definitions and relationships.
  • Schema standards and implementation rules.
  • Canonical brand representation.
  • Cross-property and cross-market consistency.
  • Alignment between legal constraints and brand expression.

Without centralized ownership, entity signals drift – and visibility follows.
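
To make “canonical brand representation” concrete, here is a minimal, hypothetical sketch of what a centrally governed entity definition could look like: the JSON-LD a governing CoE might require every property to emit unchanged. The organization name, URLs, and identifiers are placeholders I made up for illustration, not a prescribed standard.

    import json

    # Hypothetical canonical entity definition maintained centrally by the CoE.
    # Local teams may extend it, but may not override the core identity fields.
    CANONICAL_ORGANIZATION = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": "Example Corp",                      # one name, everywhere
        "url": "https://www.example.com/",           # one canonical home
        "logo": "https://www.example.com/logo.png",
        "sameAs": [                                  # reinforcing identity signals
            "https://www.wikidata.org/wiki/Q0000000",
            "https://www.linkedin.com/company/example-corp",
        ],
    }

    def render_jsonld(extra_properties: dict | None = None) -> str:
        """Return the markup a template embeds; market-specific additions are
        allowed, but the canonical core always wins on conflicts."""
        entity = {**(extra_properties or {}), **CANONICAL_ORGANIZATION}
        return f'<script type="application/ld+json">{json.dumps(entity)}</script>'

    print(render_jsonld({"contactPoint": {"@type": "ContactPoint", "telephone": "+1-555-0100"}}))

The point is not these specific fields; it is that the definition lives in one governed place and templates consume it rather than redefining it market by market.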

3. Content Commissioning Standards

One of the most important shifts in modern SEO is where governance occurs in the content lifecycle. A governing CoE does not review content after publication. It defines what qualifies for creation in the first place. By setting structural and intent-based requirements upstream, it eliminates downstream debate and rework.

This means governing:

  • Content structure and format requirements.
  • Intent mapping and coverage frameworks.
  • Depth and completeness expectations.
  • Internal linking rules.
  • Topic and market rollout models.

When these standards are enforced before content is commissioned, SEO stops negotiating outcomes and starts shaping inputs.

4. Cross-Market Consistency

Global organizations need flexibility, but flexibility without oversight quickly turns into fragmentation. A governing CoE ensures that deviations from global standards are visible, intentional, and accountable. It does not eliminate local autonomy; it prevents unintentional conflict.

This requires authority over:

  • Global standard adoption.
  • Local deviation review and approval.
  • Hreflang governance.
  • Language-versus-market resolution.
  • Canonical ownership rules.

Without centralized oversight, local teams often send conflicting signals that quietly erode global visibility.

5. Measurement & Accountability Integration

Finally, governance fails if it cannot be measured and enforced. A real SEO CoE controls not just reporting, but accountability. If search performance represents systemic risk, it must be monitored and escalated like one.

That includes ownership of:

  • SEO performance standards.
  • Reporting frameworks.
  • Shared key performance indicators across departments.
  • Compliance monitoring.
  • Escalation authority and executive visibility.

SEO must be measured as infrastructure, not as a marketing channel. When failures carry organizational consequences, governance becomes real.

Control Vs. Influence: The Critical Difference

Most SEO Centers of Excellence operate through influence. They publish best practices, provide training, and offer guidance in the hope that teams will comply. When alignment exists and incentives are shared, this approach can work.

Enterprise environments rarely meet those conditions.

Influence depends on cooperation. It assumes teams will voluntarily prioritize SEO standards alongside their own objectives. When deadlines tighten or tradeoffs arise, influence is the first thing to give way. What remains are local decisions optimized for speed, risk avoidance, or revenue, not for long-term discoverability.

Governance operates differently.

A governing SEO CoE does not dictate how teams build solutions, but it does define the non-negotiable requirements those solutions must satisfy. It establishes mandatory operating standards for templates, structured data, entity representation, and market compliance, and it embeds those standards into workflows before assets are released.

This distinction is often misunderstood as “SEO trying to control everything.” In reality, governance is about oversight, not micromanagement. Engineering still engineers. Product still prioritizes. Markets still localize. But all of them operate within enforced constraints that protect search visibility as a shared enterprise asset.

That difference becomes visible in where authority actually exists. Advisory CoEs can recommend standards, but they cannot enforce template compliance, approve deviations, require pre-launch checks, or escalate violations. Governing CoEs can. Enterprise SEO only scales under that model. Not because teams agree with SEO, but because the organization has decided that discoverability is important enough to be protected by enforceable standards.

Organizational Impact Of A Governing CoE

When SEO governance is institutionalized, the effects extend well beyond search metrics.

Structural errors begin to decline, not because teams are fixing issues faster, but because many of those issues never make it to production. Standards enforced upstream prevent the same mistakes from being replicated across templates, markets, and releases. SEO shifts from remediation to prevention.

Visibility improves for the same reason. When signals are consistent and scalable, search systems can form a stable understanding of the brand. That consistency compounds over time, reinforcing eligibility rather than constantly resetting it.

Markets also begin to align more naturally. Governance doesn’t eliminate local flexibility, but it requires that deviations be explicit, reviewed, and justified. Instead of fragmentation happening quietly, exceptions become visible and accountable. Global coherence stops being accidental.

In AI-driven discovery, this coherence becomes even more valuable. Eligibility improves not through tactical optimization, but because entities, content, and relationships are structured in ways machines can reliably interpret. Brands stop competing on individual pages and start competing as systems.

Perhaps most noticeably, internal friction drops. When SEO standards are embedded into workflows, teams stop renegotiating fundamentals on every launch. The same conversations don’t have to happen repeatedly, and escalation becomes the exception rather than the norm.

Counterintuitively, this increases speed. When governance defines the rules of the road, execution accelerates because teams can focus on building within known constraints instead of debating them after the fact.

The Final Reality

Enterprise SEO rarely fails because teams aren’t trying hard enough. It fails because governance is missing.

Over the years, I’ve helped design and implement Search and Web Effectiveness Centers of Excellence inside large organizations. The ones that worked best all shared a common trait: They had real authority to guide and enforce compliance. Not heavy-handed control, but clear standards backed by the ability to say no when those standards were ignored.

What’s often misunderstood is that these governing CoEs were also the most collaborative. Because authority was clear, teams didn’t have to renegotiate fundamentals on every project. Everyone understood the shared goals and the mutual benefits of operating as a coordinated system rather than as isolated functions. Governance removed friction instead of creating it.

Those CoEs succeeded by treating search visibility as a team sport. Cross-department initiatives weren’t exceptions; they were the operating norm. Development, content, product, and marketing aligned around enterprise objectives because the value of doing so was explicit and reinforced through process, not persuasion.

By contrast, CoEs built solely to advise rarely achieved that alignment. Without enforcement, standards became optional, exceptions multiplied, and collaboration depended on goodwill rather than structure.

Modern search leaves little room for that model. Organizations that want to maintain control over how they are discovered, understood, and recommended must move beyond documentation and consensus-building alone. Governance is what makes collaboration durable. It turns good intentions into repeatable outcomes.

In an AI-driven search environment, that shift is no longer aspirational. It is the difference between being represented accurately and being replaced quietly by sources that are.

Featured Image: Masha_art/Shutterstock

https://www.searchenginejournal.com/the-modern-seo-center-of-excellence-governance-not-guidelines/566097/




Why Your Search Data Doesn’t Agree (And What To Do About It) via @sejournal, @coreydmorris

The quarterly business review is upon us. We pull reports from Google Analytics 4, Search Console, Google Ads, and the customer relationship management (CRM) platform, and we find that none of them match. In fact, despite being connected to the same campaign and the same focus, they are quite different.

This is work done, data collected, and numbers reported back to us from multiple platforms that are tracking the same campaign over the same time period, yet giving us different figures.

This isn’t a new issue, but in my experience, it’s becoming a bigger issue.

Privacy changes, continued attribution modeling challenges, platform silos, and even the different ways each platform lets us customize or configure conversions all contribute to the problem. And I’ve made it this far in writing this article without mentioning AI and LLM traffic, which adds yet another layer of ambiguity.

The issue isn’t simply bad data. It is the fact that search data is coming from different systems that have different purposes. Those different purposes result in different tracking and collection methods, creating a maze or puzzle for us to try to piece together, often with pieces that don’t fit.

With this problem comes a business risk. Conflicting data can slow decision-making and distract from the most important decisions at hand, sending teams down detailed side paths trying to make the data agree and questioning whether it can be trusted.

Sometimes, when metrics don’t align, it signals a deeper issue: an over-reliance on channel-specific key performance indicators or a lack of shared definitions of success among stakeholders, either of which can create tension.

When SEO says traffic is up, paid search shows conversions are down, and the CRM pipeline data shows things are flat, we can slide into trying to figure out which one is right and where the gap is. Trying to “fix” the numbers until they match, though, is often the wrong reaction; our approach should be rooted in understanding what each set of data is actually telling us so it can guide our strategies and decisions.

There are many factors we can incorporate into how we understand and work with conflicting data, including accepting that this is a problem we can’t change, only navigate.

Understand And Accept That Platforms Measure Different Things

Different platforms measure different things. Yes, they might sound the same, or be named the same thing in a report or as a KPI, but in many cases, they are tracked and measured in a fundamentally different way.

For example:

  • GA4: Measures sessions, events, and modeled behavior, using its own tag and collection method.
  • Google Ads: Measures ad interactions and conversions that the platform itself measures and attributes, using its own tag and collection method.
  • Search Console: Provides impressions, clicks, and other anonymized, aggregated data that is not tracked or sourced the way GA4 collects its data.
  • CRM: Typically tracks identified individuals as they move through leads, opportunities, and on to revenue.

The differences in metrics, as well as collection methods, will always result in different numbers and data points, which may or may not seem close to telling the same story.

Identify Common Causes Of Data Discrepancies

Beyond the basic metrics and KPIs, we want to go deeper and map out how performance looks overall. That means we have to get into attribution models. Those can be as simple as first touch or last click, or as involved as a data-driven formula.
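
As a quick illustration of why the choice of attribution model alone changes the numbers, here’s a minimal, hypothetical sketch (the journey and channel names are made up) showing the same single conversion credited three different ways.

    from collections import Counter

    # One hypothetical journey, one conversion, credited differently by model.
    touchpoints = ["organic_search", "paid_search", "email", "direct"]

    def first_touch(path):
        return Counter({path[0]: 1.0})           # all credit to the first channel

    def last_touch(path):
        return Counter({path[-1]: 1.0})          # all credit to the last channel

    def linear(path):
        share = 1.0 / len(path)                  # equal credit to every channel
        credit = Counter()
        for channel in path:
            credit[channel] += share
        return credit

    for name, model in [("first touch", first_touch), ("last touch", last_touch), ("linear", linear)]:
        print(name, dict(model(touchpoints)))

Three defensible models, three different answers for the same conversion, which is exactly the kind of disagreement we see across platforms.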

However, there might be obvious tracking gaps where forms, calls, or offline conversions occur that our systems can’t pick up. Add to that privacy changes related to consent mode, cookies that can’t be leveraged, time lags (does anyone else have 50 tabs open for 100 days at a time like me?), and even cross-device search behavior.

Again, many of these are not new, but they seem to be amplified, and we can forget about them when looking at data without challenging assumptions or asking what might be missing or never collected.

My team has recently been in a fight against bots and spam, and we have been testing and navigating site-wide validation tools, which, if not implemented properly, can create gaps in capturing referral headers or strip UTM parameters.
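
Here’s one hedged way you might check for that kind of gap: a small sketch that follows a tagged landing-page URL through its redirect chain and flags any UTM parameters that don’t survive to the final URL. The URL and parameters are placeholders, and it assumes the requests library is installed; a validation tool that challenges visitors with JavaScript may need a different approach.

    from urllib.parse import urlparse, parse_qs

    import requests

    # Hypothetical tagged landing-page URL; swap in one of your own campaign URLs.
    TAGGED_URL = (
        "https://www.example.com/landing"
        "?utm_source=newsletter&utm_medium=email&utm_campaign=spring"
    )

    def check_utm_survival(url: str) -> None:
        """Follow redirects (for example, through a validation interstitial)
        and report which UTM parameters survive to the final URL."""
        original_params = parse_qs(urlparse(url).query)
        response = requests.get(url, allow_redirects=True, timeout=10)
        final_params = parse_qs(urlparse(response.url).query)

        for param in original_params:
            if param.startswith("utm_") and param not in final_params:
                print(f"Stripped along the way: {param}")
        print(f"Final URL after {len(response.history)} redirect(s): {response.url}")

    check_utm_survival(TAGGED_URL)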

Define Sources Of Truth And Hierarchy

With all the tech, tools, collection methods, and overall sources, we can end up with information overload and a whole host of conflicting sources whose differences we’re working to understand and reconcile.

I contend that not all data is equal when it comes to answering performance questions.

Examples of the data we’re seeking and the key source for each:

  • Revenue & Pipeline: CRM.
  • Leads: CRM, and/or trusted, validated platform conversion metrics.
  • On-Site Behavior: GA4.
  • Search Visibility: Search Console.
  • Ad Performance: Google Ads, other native ad platforms.

A shift in thinking might be that we have to stop trying to make one platform answer every question. The perfectionist in me struggles with having to say that, but it is the reality of the data source and attribution world we live in.

Align Metrics To Business Outcomes

I know that many marketing leaders, teams, and agencies inherit metrics and historical performance data. It isn’t always easy to reconfigure KPIs, make quick changes, or start tracking and reporting on things differently.

Marketing may be accountable for channels and platforms, while sales (and/or other functions) are looking at things further downstream, like leads, pipeline, and ultimately revenue.

When it comes to search marketing, and to where we’re headed with being found in LLMs as well, centering more on the connection between search marketing and business outcomes (not channels) is important. This isn’t a new concept, but it warrants focus and investment, as it won’t become less important over the coming months and years. This is a priority area for marketing leadership focus.

Create Consistent Definitions Across Roles & Teams

With different roles and teams looking at different definitions, collection methods, platforms, and data sources, we are, by default, likely speaking some of the same language but meaning very different things.

It is hard enough to manage the data; it can be impossible to move forward when it comes to how data is used and interpreted for different purposes.

What is a “conversion”? What counts as a “qualified lead”? How is “revenue” tracked? What is the source of truth for how a lead “source” is defined?

Definitions are often a bigger driver of misalignment than the data itself.

Use Trends When Exact Matches Are Not Realistic

Assuming you have accepted the truth that we can’t make all the data sources perfectly match, we can still find meaning in the data we’re looking at.

That comes in what we see in terms of trends. Are things trending across sources and data points in the same direction? Are there spikes or drops that we see consistently across platforms and sources?

Comparing and contrasting anomalies, finding trends, and understanding them can help us identify where data doesn’t match and where precision doesn’t have to be perfect, as we look for consistency, direction, and a clear picture of what actually happened.

Close The Gap Between Marketing And CRM

I still sometimes get looked at a little funny when I ask the CRM administrator or decision-maker who sits outside of marketing about non-digital marketing leads, data, and offline sources.

I advocate that, even if we’re just focused on digital or search marketing, we push for offline conversion imports, CRM feedback that is specific to the campaigns and channels/platforms that we’re focused on, and respective lead quality scoring.

We need to understand the business side of the data connected to our efforts in digital marketing and search. The better integrated the data, the more feedback we get, and the more our sources work together, the more impactful our efforts can be.

Educate Stakeholders On Why Data Won’t Match

In working with other C-suite leaders, executives, or stakeholders, you might find that they are used to a world of accounting, financial metrics, and more consistent data and absolutes. The fact that marketing data sources don’t match could be a big concern for them.

Keeping that in mind, it will serve you well to educate stakeholders and to prioritize their focus on what matters: the things we’ve already unpacked in this article.

It can derail a meeting fast when the numbers don’t match, don’t make sense, or create confusion. When the numbers can’t help connect the dots, they often create new questions, erode confidence, and take the conversation away from the overall business alignment and impact of the marketing efforts.

Develop The Performance Narrative, Not Just Dashboards

We naturally live in a world of dashboards with performance marketing, digital marketing, and search. We have the ability to track so much and have it all at our fingertips, sourcing from all of the various places we track and measure the impact of our work.

While it may be clear to you, looking at a complex dashboard, what the takeaways are, it will be confusing, distracting, and possibly misleading for everyone else.

Reporting shouldn’t just show numbers; it should explain what is happening, why, and what to do next. In your role in marketing leadership and subject matter expertise, your ability to shift from being a reporter of data to an interpreter of broader performance connected to strategy and business outcomes is a noble calling.

In Summary

Data conflicts and disagreements aren’t a flaw or evidence of an error (although you need to audit regularly to make sure you trust the collection and don’t have gaps). They are a reality of digital and search marketing.

When our varying roles, teams, and stakeholders understand this, we can shift our focus to mapping to business outcomes and leveraging our data for decisions, rather than being distracted by the nuances of things we can’t ultimately match exactly and reconcile.

Our goal isn’t to make the numbers match. It is to be able to make informed and confident decisions to drive business outcomes and success.

Featured Image: Accogliente Design/Shutterstock

https://www.searchenginejournal.com/why-your-search-data-doesnt-agree-and-what-to-do-about-it/570180/




Google Just Made It Easy For SEOs To Kick Out Spammy Sites via @sejournal, @martinibuster

Google updated its report-a-spam documentation to reflect that the feature may now be used to initiate manual actions against websites that are found to be spamming. This policy change means SEOs’ spam reports can now potentially go into the queue for a manual action.

Change In Spam Report Policy

The previous spam reporting documentation said that Google would not use spam reports to take action against websites.

This wording was mostly removed:

“While Google does not use these reports to take direct action against violations, these reports still play a significant role in helping us understand how to improve our spam detection systems that protect our search results.”

That passage was narrowed to emphasize that submitted spam reports help improve Google’s spam detection systems:

“These reports help us understand how to improve the spam detection systems that protect our search results.”

More Aggressive Approach To Spam

Google also added new wording to make it clear that it may use spam reports to take manual actions against websites. Google used to describe manual actions in terms related to penalization, but the word “penalization” may carry connotations of punishment, which isn’t what Google is doing when it removes a site from the index. It’s not a punishment; it’s just a removal from the index.

Google’s new wording makes it clear that taking manual action against reported sites is now an option:

“Google may use your report to take manual action against violations. If we issue a manual action, we send whatever you write in the submission report verbatim to the site owner to help them understand the context of the manual action. We don’t include any other identifying information when we notify the site owner; as long as you avoid including personal information in the open text field, the report remains anonymous.”

Everything else about the page is the same, including the button for filing a spam report.

Screenshot: Spam Report Button

Clicking the “Report spam” button opens a form that can now lead to a manual action:

Screenshot: Spam Report Form

Is This Good News For SEOs?

Site owners and SEOs who are sick of seeing spammy sites dominating the search results may want to check out the new page and start reporting actual spammy websites. Nobody really enjoys spam, and now there’s something users can do about it.

Featured Image by Shutterstock/NLshop

https://www.searchenginejournal.com/google-just-made-it-easy-for-seos-to-kick-out-spammy-sites/572118/




Google Search Console Glitch Gives SEOs A Scare via @sejournal, @martinibuster

Google Search Console erroneously sent out emails to site owners advising them that Google had just started to record impressions beginning on April 12. The implication of the message is that Search Console had not previously been collecting those impressions, which is incorrect.

Search Console Impressions

The Search Console impressions report shows how often a site appeared in Google’s search results, regardless of whether or not users clicked. The impressions report by itself is not the metric to pay attention to; the meaningful metrics are the associated keywords and their positions in the search results. This enables an SEO to identify high-value keyword performance and to make better decisions about addressing performance shortcomings.

The report breaks the data down by the following dimensions (a hedged API sketch for pulling the same breakdown follows this list):

1. Queries (What people searched)

2. Pages (Which URLs showed up)

3. Countries (Where searchers were located geographically)

4. Devices (Desktop, Mobile, and Tablet)

5. Search Appearance (shows if the impressions are from Rich Results, Videos, Web Light, and Merchant Listings)
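
For anyone who wants to pull that breakdown programmatically rather than through the UI, here is a hedged sketch using the Search Console API via the google-api-python-client package. It assumes you already have authorized OAuth credentials (creds) for a verified property; the property URL, date range, and dimensions are placeholders to adapt.

    from googleapiclient.discovery import build

    def impressions_by_query_and_page(creds, site_url="https://www.example.com/"):
        """Pull impressions, clicks, and average position broken down by
        query and page for a verified Search Console property."""
        service = build("searchconsole", "v1", credentials=creds)
        response = service.searchanalytics().query(
            siteUrl=site_url,
            body={
                "startDate": "2026-03-01",
                "endDate": "2026-04-12",
                "dimensions": ["query", "page"],  # country, device, etc. also available
                "rowLimit": 100,
            },
        ).execute()

        for row in response.get("rows", []):
            query, page = row["keys"]
            print(f'{row["impressions"]:>8} impressions | pos {row["position"]:.1f} | {query} | {page}')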

Actual Search Console Reporting Errors

Google sent the following message to Search Console users:

“Google systems confirm that on April 12, 2026 we started collecting Google Search impressions for your website in Search Console. This means that pages from your website are now appearing in Google search results for some queries. Here’s how you can monitor your site’s Search performance using Search Console.”

This is an interesting message because it comes after it was disclosed that Google had been incorrectly reporting impressions since May 13, 2025. A note in a Google Support page from April 3 explained:
https://support.google.com/webmasters/answer/6211453#performance-reports-search-results-discover-google-news&zippy=%2Cperformance-reports-search-results-discover-google-news

“A logging error is preventing Search Console from accurately reporting impressions from May 13, 2025 onward. This issue will be resolved over the next few weeks; as a result, you may notice a decrease in impressions in the Search Console Performance report. Clicks and other metrics were not affected by the error, and this issue affected data logging only.”

Is today’s erroneous note related to any fixes made to the impressions report? Google’s John Mueller described it as just a glitch.

Mueller posted remarks on Bluesky about the message in response to a query about it:

“Sorry – this is just a normal glitch, unrelated to anything else.”

It’s curious because it appears that the impression reporting errors and this erroneous messaging may be related. Are they connected, or is it just a glitch?

https://www.searchenginejournal.com/new-google-search-console-message-glitch-gives-seos-a-scare/572072/