Why Google Discover Is No Longer Just For Publishers via @sejournal, @theshelleywalsh

At Google Search Central Live in Zurich last December, Clara Soteras spoke about how brands and ecommerce sites can use Discover as a strategy.

Discover is a dominant source of traffic for news publishers, and so far it has resisted the encroachment of AI. So, I was interested to see how Discover might hold opportunities beyond the newsroom.

However, in Zurich, John Mueller repeated his warning that Google Discover traffic comes for free, and someday it could be zero, much like the Google Search traffic that is already diminishing for many brands.

So, I sat down with Clara on IMHO to talk about what’s working, what’s breaking, and where the real opportunity lies in 2026.

Clara is head of innovation and digital strategy at AMIC and a professor at the Autonomous University of Barcelona, where she teaches SEO for news at several business schools.

“Discover adds to you some possibility to catch and to impact people that don’t know that they need you.”

Discover Is The Primary Channel, But With A Warning

In an article titled “Why publishers should worry about growing reliance on Google Discover,” the Press Gazette reported that for 2,000 global news and media websites, 68% of Google traffic now comes from Discover, in comparison to 32% from search.

I asked Clara if she thought Discover could offer any salvation to news publishers impacted by AI.

She confirmed what many publishers are experiencing, “Google Discover is the first channel of traffic for the majority of publishers today. And we need to understand that this is a good channel to achieve and to catch different audience, to achieve page views and volume of traffic.”

But Clara was quick to draw a line between Discover and traditional search. The ranking factors are different, and publishers who treat Discover as an extension of their search strategy are making a mistake.

“The basic things are the same, but we need to know that the location, the image, or the title, the headline are really important to rank on Google Discover.”

She also noted that not all content categories perform equally in the feed. Politics, for example, rarely appears. Publishers who want Discover visibility need to lean into lifestyle, sports, and culturally relevant content.

It’s For Free. And Someday It Can Be Zero

At Google Search Central Live in Zurich, John Mueller emphasized a key point that publishers cannot rely on getting 90% of their traffic from a single source. I asked Clara whether we might be in danger of swapping one reliance for another, from Google SERP traffic to Discover, and if publishers should be leveraging other channels.

Clara explained, “In Zurich, John Mueller repeats the same advice that Google is telling in every session that we have with the publishers. They think that the volume of traffic that Google Discover adds to your website is for free, and someday it can be zero.”

That volatility is real: “We know some publishers that start from scratch and achieve a lot of traffic and six months later they need to close the website because they lose all the traffic.”

When I asked what balance she would recommend, Clara said it depends on the size and niche of the publisher, but “you cannot have 90% of Google Discover traffic because if Google Discover decides to not see you tomorrow, you will lose all your audience because it’s not a loyal audience.”

Discover traffic is passive. It’s algorithmically injected into feeds, and Clara suggested that publishers need to diversify into social strategy, work with content creators, and consider building community around their topics.

How Brands Can Win In Discover

Clara’s presentation in Zurich was about a strategy most brands haven’t considered: applying newsroom methodology to Discover for ecommerce and brand sites.

Google has expanded the Discover feed to allow users to follow entities, creators, and companies, not just traditional publishers. YouTube, Instagram, and content creator profiles now appear, and for brands, this opens a different kind of opportunity.

“Search is the first channel probably for commerce because the user knows your brand or knows what they need. Discover adds the possibility to catch and impact and generate impressions to people that don’t know that they need you.”

Her methodology for brands mirrors what high-performing newsrooms do, which is to monitor social conversations and trends, align content to the moment, and move quickly.

“If we decide to create a strategy for a brand, we need to talk about the trend of the day. We need to talk about our product and service but really near to the trend.”

Clara is currently working with content teams at multiple companies to train them on Discover-specific execution. Headlines need to be more than 13 words, images must be chosen strategically for the format, and brands need to build entity authority through a sustained cluster strategy, publishing around the same entity on different days.

“For me, some of the best ranking factors are the image, the headline, and working with the entity every day to be a good reference for this entity.”

AI Content Can Rank In Discover, But It Doesn’t Perform

Andy Almeida from Google’s Trust and Safety team for Discover has said that nearly 20% of sites recommended by Discover are AI-generated, coining the memorable phrase “AI slop is taking over the world.” I asked Clara whether AI content is becoming a real threat that could displace legitimate news publisher content, and how publishers can defend against it.

Clara acknowledged the reality that AI content does rank on Discover. But she pointed out that Google’s quality and trust department is actively applying manual penalties when they identify AI content or fake news in the feed.

“If they think that you are publishing AI content or fake news, they can apply and ask that you need to raise or correct your content.”

More importantly, she shared a real-world example from her own clients that illustrates the performance gap between AI and human content.

“I had a client that worked with different AI tools to create basic content, and journalists adapted a little bit of this content. If you see the performance, you see only 100 views for an article versus 12,000 views for another article created by a human.”

Clara’s position is that AI can assist with ideation and strategic planning, but the content itself needs to be created by humans. Human-created content performs better in Discover, and she believes Google will continue to reward it.

“AI can give us ideas and strategic tips but for me it’s important that the content will be created by a human, by a journalist.”

The Opportunity AI Overviews Can’t Touch

With AI Overviews eating evergreen informational queries, I asked Clara what she thinks are the areas of opportunity in 2026.

Clara is currently working on research reports analyzing the impact of AI Overviews on publishers in Spain and the UK, and her data confirms, “if you work with breaking news, you have an opportunity, because of the top stories module for breaking news.”

Real-time journalism still creates visibility that AI summaries cannot fully replace. For publishers looking ahead to 2026, Clara’s focus is on reinforcing the strengths that machines can’t replicate: breaking news speed, entity authority, trend alignment, and diversification beyond Discover itself.

Human Expertise Is The Advantage

Google Discover is a powerful channel and in many cases essential, but it’s algorithmically volatile by nature.

However, it’s an opportunity for brands and ecommerce sites. The Discover feed is no longer a news-only space, and Clara’s work applying newsroom methodology to commercial content is an approach that most brands haven’t explored yet.

Where AI is reshaping both search results and the content that feeds them, the competitive advantage ultimately comes down to human expertise, editorial judgment, and the ability to move fast on what matters right now.

Watch the full interview with Clara Soteras here:

[embedded content]

Thank you to Clara Soteras for offering her insights and being my guest on IMHO.



This post was originally published on Shelley Edits.


Featured Image: Shelley Walsh/Search Engine Journal

https://www.searchenginejournal.com/why-google-discover-is-no-longer-just-for-publishers/568777/




If AI Can’t Read Your CMS, It Can’t Recommend Your Brand [Webinar] via @sejournal, @lorenbaker

A Practical Audit for Marketing Leaders Using Enterprise-Level Content Management Systems (CMS)

AI-driven search is not a future consideration. It is already shaping how brands are discovered, evaluated, and chosen. 

Yet many CMS platforms were built for a different era of search, one focused on pages and rankings rather than structured content and machine interpretation. If your CMS cannot clearly communicate meaning to AI systems, your visibility is at risk long before a customer ever sees your site.

For CMOs and marketing leaders, this is no longer just an SEO discussion. It is a platform-level question. 

Is your current website stack quietly limiting performance across discovery and conversion?

In this marketing leader webinar, we walk through a practical CMS audit framework designed to help marketing leaders evaluate whether their enterprise and large-scale website implementation is built for AI-driven search. 

You will gain a clear understanding of what AI readiness means at the system level and how to identify structural gaps before they impact growth.

What You’ll Learn

  • Where enterprise implementations most often fall short in AI-driven discovery
  • How AI search is reshaping SEO strategy, content modeling, and conversion performance
  • What defines an AI-ready CMS stack, including structured content and flexible architecture

Why Attend?

This webinar offers a strategic lens on whether your CMS is enabling visibility or restricting it. You will leave with a clear framework to assess risk, strengthen your digital foundation, and ensure your platform supports how discovery works today.

Register now to evaluate whether your CMS is prepared for AI-driven search.

🛑 Can’t attend live? Register anyway, and we’ll send the on-demand recording.

https://www.searchenginejournal.com/if-ai-cant-read-your-cms-it-cant-recommend-your-brand/568601/




Google Removes JavaScript SEO Warning, Says It’s Outdated via @sejournal, @MattGSouthern

  • Google removed outdated JavaScript and accessibility guidance from its documentation.
  • Google Search has rendered JavaScript well for years.
  • It’s the latest in a series of JS documentation updates.

Google removed its JavaScript accessibility guidance from help documents, saying the advice is outdated and noting it has rendered JavaScript for years.

https://www.searchenginejournal.com/google-removes-javascript-seo-warning-says-its-outdated/568829/




WordPress Releases AI Plugins For Anthropic Claude, Google Gemini, And OpenAI via @sejournal, @martinibuster

WordPress has created three new plugins that make it easy to add OpenAI, Google Gemini, or Anthropic Claude integration through the PHP AI Client SDK. The plugins enable text generation, image generation, function calling, and web search support.

Requirements For Using The WordPress AI Plugins

PHP 7.4 or higher is required to use these plugins, along with an API key for the chosen AI provider. Users on WordPress 6.9 will need to install the WordPress PHP AI Client SDK; WordPress 7.0, which rolls out in early April, will have the SDK integrated by default.

The official WordPress.org documentation for the PHP AI client explains:

“The PHP AI Client SDK …provides shared infrastructure that lets WordPress plugins and other PHP projects integrate AI capabilities rapidly and flexibly. …it offers a unified interface that works across all AI providers – from simple text generation to complex multimodal operations, streaming responses, and long-running tasks.

Developers specify what AI capabilities they need, and users manage their provider credentials in one place. Those credentials then work automatically across all compatible plugins on their site.”

WordPress AI Plugin Features

The features for the OpenAI plugin are:

  • Automatic provider registration
  • Function calling support
  • Image generation with DALL-E models
  • Text generation with GPT models
  • Web search support

The description for the WordPress OpenAI plugin explains:

“Available models are dynamically discovered from the OpenAI API, including GPT models for text generation, DALL-E and GPT Image models for image generation, and TTS models for text-to-speech.”

Features for the Anthropic Claude plugin:

  • Automatic provider registration
  • Extended thinking support
  • Function calling support
  • Text generation with Claude models

Available features for Google Gemini integration:

  • Automatic provider registration
  • Function calling support
  • Image generation with Imagen models
  • Text generation with Gemini models

Download The WordPress AI Plugins

Find the WordPress AI Plugins here:

AI Provider for Anthropic

AI Provider for Google

AI Provider for OpenAI

https://www.searchenginejournal.com/wordpress-releases-ai-plugins-for-anthropic-claude-google-gemini-and-openai/568822/




Google Updates AI Mode Recipe Sites Results In Response To Backlash via @sejournal, @martinibuster

Robby Stein, VP of Product for Google Search, posted that Google is updating AI Mode so that it surfaces more links to creators when users search for recipes. Google’s AI Mode has generated controversy by synthesizing multiple recipes into what many have taken to calling Frankenstein recipes. This update aims to fix that by making it easier to tap through to a link to the recipe sites.

Change To How AI Mode Displays Recipes

Google created a display of recipes that, when clicked, opens a side panel showing recipe images and a summary of the recipe. The user can click through from there to visit the recipe site and explore the dish in more depth.

Robby Stein said that this change has already rolled out. I tried variations of Stein’s example keyword phrase (easy recipes for two) and was able to trigger a recipe panel, which I believe is what he’s referring to. On the left is a summary, and in this specific AI Mode result, I had to scroll down to reach the images that can be clicked.

Screenshot Of AI Mode Without Images To Click

Scrolling down the page reveals a new topic with images that can be clicked.

Screenshot Of AI Mode Images That Can Be Clicked

The problem with this AI Mode result is that it’s not clear that those images can be clicked. They look like decorative images. A user may not intuitively understand that clicking those images will generate a side panel to the right with more information on that particular dish.

Screenshot Of AI Mode Panel With Recipe

Announcement By Robby Stein Of Google

Google’s Robby Stein made a direct mention of the “feedback” they had received about how AI Mode was handling meal ideas.

According to Robby Stein:

“We’ve heard feedback on recipe results in AI Mode, and we’re making updates to better connect people with recipe creators on the web. Starting today, when you search for meal ideas like “easy dinners for two,” you can tap on the dish to see links to relevant recipe sites, plus a short overview of the dish to help with inspiration.

We’re also planning to bring helpful information like cook time to more recipe results, which testers have found useful for deciding on a recipe. We know there’s more work to be done on this, so stay tuned for future updates.”

He also posted a video of the feature in action:

Featured Image by Shutterstock/Luis Molinero

https://www.searchenginejournal.com/google-updates-ai-mode-recipe-sites-results-in-response-to-backlash/568798/




Google Zero Is A Lie

There is a pervasive narrative doing the rounds in the publishing industry called “Google Zero.” This narrative, embraced by many industry leaders, posits that traffic from Google – Search and Discover – will decline and eventually become negligible.

This Google Zero narrative is entirely false, and extremely dangerous. And I’m going to explain why.

When the concept of “Google Zero” first began to emerge, I thought it could be a useful way to frame the strategic approaches publishers should consider when optimizing for Google. But it’s taken on an entirely different meaning, one that is actively dangerous and downright false.

It’s true, gaining traffic from Google hasn’t gotten easier. Websites need to work harder to grow their share of Google visits, both in Search and Discover. This is not a new development – the writing has been on the wall for nearly two decades.

Google started enriching its search results with all kinds of different elements in 2007, intended to provide exactly the kind of information Google’s users are looking for. The clean list of 10 blue links has long been forgotten.

Since Google began introducing new elements into its search results, every new feature has diverted clicks away from websites. Often, these clicks were channeled toward Google’s own properties like YouTube, Google Maps, or the image search vertical. And increasingly, searches didn’t result in any clicks at all when the right information was shown to the user directly on the results page.

This trend continued with every new feature Google introduced into its results. Many websites were affected. Lawsuits were launched – some of which are still ongoing.

News publishers didn’t really feel the pain, however. On the contrary, the introduction of news carousels on Google’s results increased the traffic Google sent to publishers.

And then AI Overviews arrived, and everybody panicked.

Apparently, The Verge’s Nilay Patel was the first to coin “Google Zero” as a phrase, though I suspect he was more than a little inspired by Sparktoro’s Amanda Natividad and Rand Fishkin, who have been talking about “Zero-Click Marketing” for years.

I understand why Nilay is worried about Google. According to Similarweb, Google traffic to The Verge has been steadily declining since late 2023, predating the launch of AI Overviews.

Similarweb data showing The Verge losing Google traffic
Image Credit: Barry Adams

Interestingly, this graph shows that Google is still the largest organic channel for The Verge, surpassed only by direct visits (which, by the way, are also declining). And you’ll also be interested to know that the periods of strongest Google traffic decreases on The Verge correlate with Google core algorithm updates and Site Reputation Abuse penalties.

I find it funny that The Verge seems to have an existential issue with the SEO industry as a whole. That, too, might contribute to their less-than-stellar performance in Search in recent years. Not to mention the fact that every channel is sending less traffic to The Verge in recent years.

Perhaps it’s not entirely Google’s fault that The Verge is experiencing a decline.

One website’s editor complaining about Google traffic doesn’t make for a narrative. Yet somehow, the Google Zero story has become embedded in the publishing industry, with very little critical analysis.

A few weeks ago, I was at a news-focused conference where one of the speakers presented a slide showing data from Chartbeat. This data indicated a huge decline in Google traffic to many of Chartbeat’s customers.

Chartbeat graph showing 33% Google traffic decline
Image Credit: Barry Adams

The data was published on the Reuters Institute’s website as part of their 2026 predictions, and seems to have been accepted as gospel by many in the industry.

The speaker who presented this slide works for one of my clients. I have access to this client’s Google Search Console data for dozens of their websites across Europe. I know exactly how much Google traffic they’ve lost in the last few years.

They haven’t lost any.

In fact, the speaker’s employer is showing growth in Google traffic across many of their websites. Yet the speaker presented the Chartbeat graph as fact, without any caveat, despite having access to a wealth of data that contradicts it.

It’s not just my clients – Press Gazette recently did a deeper dive into the Google Zero panic, speaking with many UK publishers. A clear consensus emerged: Google traffic isn’t actually declining all that much.

This is supported by data from Similarweb, published by Graphite, showing the actual decline of Google traffic to the top websites on the global web is … drumroll … 2.5%.

Image Credit: Barry Adams

So, why does the Chartbeat data show such a strong decline when other sources do not? I have theories. One is that Chartbeat’s data is skewed by several of their largest clients, who may have suffered from Google’s core algorithm updates and Site Reputation Abuse penalties.

The Chartbeat data appears to be a simple aggregate, not taking individual sites’ comparative sizes into account. So when a few big sites experienced strong losses, it would skew the data heavily towards a decline, even when dozens of smaller sites don’t see any meaningful decreases.

When we look at Similarweb’s data on global web traffic, Google is still by far the most-visited website in the world, accounting for nearly 20% of all web visits. This hasn’t changed in any meaningful way in the last few years.

Similarweb data showing Google as the most visited website in the world
Image Credit: Barry Adams

Despite an abundance of contradicting data, the Google Zero panic has permeated the publishing industry. Not a week goes by without some C-level leader at a publisher declaring a shift away from Google towards other channels for audience growth.

I’m all for diversifying traffic sources. Publishers need to be less reliant on Google for their traffic, and have alternative sources of visitors that can sustain their business model. I’ve been on record saying exactly that for years.

But traffic diversification should not come at the expense of SEO. When you take your eye off the Google ball, you’re making a colossal mistake.

No matter how you interpret the data, Google is still by far the single largest source of visitors for websites. There is literally no other channel that comes close (keeping in mind that direct traffic isn’t a channel – it’s all traffic where there is no referral string associated with the visit).

Yes, it’s gotten harder to win in Google. I’ve outlined some of the underlying reasons in my AI Survival Strategies article.

But when things get harder, the dumbest course of action is to give up.

If you lower your investment in SEO, guess what happens? You lose more Google traffic. This will then reinforce your preconceived notion of Google Zero, so you invest even less in SEO, and down the spiral goes until you’re dead in the water.

Your Google Zero prophecy has come true because you’ve made it come true.

In the meantime, competing websites that continued to invest in SEO will happily scoop up the clicks you’ve abandoned.

There is literally no sign that Google is in danger of losing its position as the largest source of traffic to the web. There is no other channel rising to take Google’s place. Choosing to abandon Google is a potentially catastrophic strategic error.

Consider yourselves warned.



This post was originally published on SEO For Google News.


Featured Image: Anton Vierietin/Shutterstock

https://www.searchenginejournal.com/google-zero-is-a-lie/568668/




Why Most Enterprise SEO Operating Models Are Structurally Broken via @sejournal, @billhunt

Enterprise SEO doesn’t usually fail because of bad tactics. It fails because the operating model itself makes success nearly impossible.

For years, organizations have treated SEO like a downstream marketing function, one that audits what others build, files tickets, and hopes development or content teams eventually implement recommendations. That model worked (barely) when search engines simply ranked pages. But in today’s environment, where visibility depends on structure, eligibility, entity clarity, and machine comprehension, SEO can no longer survive as a reactive service desk.

And yet, that’s exactly where most enterprises still put it. The uncomfortable truth is this: Many enterprise SEO teams are structurally set up to lose before they even start.

The Core Problem: SEO Lives Too Far Downstream

In most large organizations, SEO sits inside marketing and is treated like quality assurance. Product or brand teams define initiatives, content teams create assets, and development builds templates and pages. SEO is then asked to review everything after launch, when the most important decisions have already been made.

By that point, issues are easy to identify but hard to change. Tickets get filed, fixes compete for priority, and implementation happens late, if it happens at all. SEO becomes a cleanup crew for choices made elsewhere.

The problem is that “quality assurance” is a misnomer. True quality assurance exists upstream, shaping plans before they harden into execution. What SEO usually does is inspection after the fact, when the opportunity to influence structure has already passed.

A recent call illustrated this perfectly. The SEO team presented a report showing hundreds of the same issues repeated across four areas of the site. The action item was familiar: Each team was asked to “fix them,” much like the report that had been circulated the month before. What no one asked was the more important question: Why are the same issues appearing everywhere, and what in the workflow is creating them in the first place?

Instead of treating the situation as a systems failure, the conversation framed it as a volume problem. More fixes. More tickets. More effort.

This is where the upstream versus downstream framing becomes tangible. The real issue isn’t that teams aren’t fixing problems fast enough; it’s that something upstream is poisoning the water. As long as the source of contamination remains untouched, the same issues will continue to surface no matter how many times they’re cleaned up downstream.

The dynamic mirrors how prevention teams are often treated more broadly. Early warnings are raised and overridden as too cautious or slowing progress. Yet when visibility drops, traffic declines, or revenue is impacted, the same team is expected to reverse outcomes created by decisions they never influenced.

Modern search does not reward post-hoc inspection or emergency response. It rewards architecture that is built correctly from the start. Search performance today is shaped upstream by decisions around information structure, entity modeling, taxonomy, internal linking frameworks, data models, and how content depth aligns to intent decisions made long before traditional SEO teams are invited into the process.

As a result, SEO teams spend most of their time fighting symptoms instead of influencing causes.

The Illusion Of “SEO Integration”

Many enterprises believe they take SEO seriously because they have the trappings of an SEO program. There is a budget allocation, an SEO team, expensive auditing tools, and dashboards. There may even be multiple agencies involved, along with a significant backlog of tickets labeled “SEO.”

But resources are not the same thing as an integrated operating model. The issue isn’t effort; it’s how those resources are deployed.

What follows isn’t a single point of failure, but a set of recurring operating patterns. Each one reflects a different way organizations claim to integrate SEO while never giving it structural leverage.  The outcome is chronic underperformance that looks like a tactical problem, but is actually a structural one.

The Four Broken Enterprise SEO Models

After working with hundreds of global organizations, a consistent pattern emerges. Most enterprise SEO teams operate within one of four flawed structures. They look different on the surface, but they all produce the same outcome: reactive SEO with limited impact.

1. The Audit Factory

This is the most common model and fails at the point of prevention.

SEO runs crawls, identifies issues, produces reports, and prioritizes fixes. The team becomes exceptionally good at finding problems. What it never gets to do is prevent them. Because SEO has visibility but not authority, every finding depends on another team to act. Issues recur because root causes are never addressed. Development teams begin to view SEO as a backlog generator rather than a partner. SEO is rewarded for identifying issues, not for eliminating them.

The organization mistakes activity for impact.

2. The Ticket Desk

In this model, SEO functions like an internal help desk and fails at the point of delivery.

SEO has no built-in priority and no integration into release cycles. Influence depends on persuasion and clever project integration rather than a mandate. Over time, SEO becomes a beggar in the backlog. Tickets are filed in Jira. They enter queues already crowded with revenue-driving projects or executive pet initiatives. SEO work becomes one request among hundreds.

Implementation takes months. By the time fixes are deployed, the site has changed again.

3. The Local Islands

This is where I have the most experience in trying to change multinational organizations, where markets are like distant islands disconnected from the heart of the organization.

Central teams define organization-wide SEO standards, but local markets control content and execution. Local priorities override global requirements. The need to “deliver for their market” means templates are resisted, shared infrastructure is avoided, and every region does its own thing.

Implementation fragments due to varied infrastructure, lack of resources, and fundamental disagreements. Effort is duplicated across markets, or conflicts arise depending on the SEO knowledge of the agency or local team. The result is conflicting signals being sent to search engines, a problem that will only become exponentially worse in the new AI environment.

4. The Orphaned Center Of Excellence

A Search Center of Excellence model looks great on paper, but living up to its potential is a challenge.

A typical SEO Center of Excellence is created to define standards, train teams, and share best practices. But the CoE often has no enforcement power. It doesn’t control templates, development standards, structured data policy, or workflows. Guidelines are published and quietly ignored. Speed and convenience win. SEO becomes “recommended,” not required.

The CoE becomes a library of forgotten best practices, and not the highly functional collaborative governing body it should be.

What All Broken Models Have In Common

Despite their differences, these operating models fail for the same structural reasons. SEO is reactive rather than embedded into the workflow and consciousness of the organization, brought in after decisions are made instead of participating in them. Execution depends on other teams with different priorities, while SEO is still measured on outcomes it doesn’t control. Authority is missing from the workflows that actually shape search performance, leaving SEO to advise on decisions that have already hardened.

As a result, SEO is treated less like infrastructure and more like compliance. This is why enterprise SEO so often feels frustratingly ineffective, not because the teams are weak, but because the organization handicaps them by design.

One consequence is rarely discussed. Experienced SEOs learn to recognize these patterns quickly, and many actively avoid enterprise roles altogether. Not because the work lacks importance, but because bureaucracy replaces progress and motion substitutes for action.

Why This Is Getting Worse In The AI Era

AI-driven search doesn’t introduce new problems so much as it magnifies existing structural weaknesses. In traditional search, damage could often be undone. Rankings recovered, pages were reindexed, and signals eventually recalibrated.

AI systems behave differently. They reward clean structure, clear entity definitions, consistent signals, deep topical coverage, and machine-readable relationships. Those qualities aren’t additive features that can be patched in later; they are properties of how a site and its underlying systems are built.

These weaknesses are not new, but they have been amplified by how search itself has evolved. As I explored in my previous Search Engine Journal article, “AI Search Changes Everything – Is Your Organization Built to Compete?”, AI-first search no longer surfaces brands based on rankings alone. It relies on structured understanding, entity representation, and organizational alignment. That shift makes structural integration critical because visibility in AI-driven ecosystems depends on how well internal systems and teams align with the way machines interpret and present information.

When an operating model prevents SEO from influencing those foundational elements, the impact extends beyond traditional SERPs. Visibility erodes across AI-generated answers, recommendations, and synthesized results, often without a clear recovery path.

Structure can’t be retrofitted into a system that was never designed to let SEO shape it.

The Real Takeaway

Enterprise SEO struggles are rarely tactical failures. They are organizational design failures disguised as execution problems. Most companies never built SEO into product workflows, development requirements, content planning, market rollouts, or governance structures. Instead, SEO was positioned as a review layer, brought in after decisions were already made.

Modern search punishes this model not through penalties, but through exclusion. Eligibility is determined upstream by structure, consistency, and machine-readable clarity, long before traditional SEO reviews take place. AI-driven systems don’t correct ambiguity after the fact; they synthesize only what they can confidently understand. When SEO is positioned as a downstream review layer, it loses the ability to influence those decisions, and visibility erodes quietly across answers, recommendations, and synthesized results with few clear recovery paths.

Coming Next In The Series

In the next article, I’ll outline what high-performing organizations do differently and introduce the embedded SEO operating model that shifts search from an audit function to a built-in enterprise capability.

Because SEO doesn’t fail from lack of effort. It fails due to a lack of structural integration.

And structure is something organizations can fix, if they’re willing to rethink where SEO actually belongs.



Featured Image: MR Chalee/Search Engine Journal

https://www.searchenginejournal.com/why-most-enterprise-seo-operating-models-are-structurally-broken/566075/




We’re Bringing The SEJ Newsroom To You, Live [Free Event] via @sejournal, @hethr_campbell

We’re bringing the SEJ newsroom to a screen near you.

On March 11 from 12–3pm ET, we’ve gathered our own search experts, alongside some very special guests, to help you master AI search visibility this year. This is SEJ Live, a new series we’ve been building behind the scenes, and I couldn’t be more excited to see it come to life.

Here’s why this matters right now: I’m seeing a huge disconnect between leadership and the marketing teams doing the work. Leadership wants performance yesterday, but those with their feet on the ground know that customer behaviors have changed. The metrics we used to rely on, the strategies our plans were built on, no longer tell the actual story. And strategies need to change.

SEJ Live is designed to help you bridge that gap.

Come, chat with others deep in these shifts, ask the questions your team is wrestling with, and see how these experts are making sense of AI-influenced search.

We’re all in the same boat, so let’s commiserate together, and share our learnings like the search community is so well known for doing!

What We’re Covering

Session 1: Newsroom: 1 New AI Search Reality. 3 SEJ Leader Perspectives
Three SEJ leaders break down 2025’s AI search shifts from three angles. Loren Baker covers the business side, Matt Southern breaks down the news, and Shelley Walsh translates the impact on your content. The question for all three: what should experienced marketers do about it heading into Q2 26?

Session 2: Traffic Changes Or Measurement Gaps? Expand Your SEO KPIs For AI Search
AI search requires a new set of KPIs tailored to how discovery and conversion occur today. Learn which metrics accurately reflect AI-driven visibility and performance. Emily Popson, VP of Marketing at CallRail, shows you how to replace outdated metrics with KPIs aligned to modern search behavior.

Session 3: Why Answer Engines Should Be on Every CMO’s Strategic Agenda
And, I’m very excited to have our guest speaker Nikhil Lai, Principal Analyst at Forrester, join us. Nikhil is going to share identified shifts in search behavior, how different teams should align around these changes, and what’s next for answer engines and you.

Who Should Be There

We’re speaking to the CMOs, marketing directors, and search leaders who are past the “AI is coming” conversation and need the “here’s what to do right now” conversation.

Each session builds on the one before it, intended to spark changes you need to make in your own strategy.

This first SEJ Live is being thoughtfully planned to bring this community and conversation to the forefront. All three sessions are live with full Q&A at the end, so bring your hardest questions. The presentations are going to be insightful, and I expect the chat is going to be lit.

Tell your peers and your team. If you’re responsible for marketing performance, reporting to leadership, or building your 2026 growth plan, these will be three hours well spent.

I hope you’ll join us.

Save Your Spot

P.S. The team is talking about doing a special AMA for any questions we can’t answer during the event. Another benefit of joining us live … it’s the only way you can submit your question.

https://www.searchenginejournal.com/sej-live-with-newsroom-free-event-march11/568089/




When Google Is No Longer A Verb: Search Becoming Infrastructure via @sejournal, @DuaneForrester

Most people do not wake up one day and decide they are done with a product category. They leave when the workflow starts to feel like work.

Think about something mundane. Planning a trip, picking a new doctor, comparing two insurance options, deciding which grill to buy, figuring out what to do in a new city for one afternoon. You used to “search.” That meant typing, scanning, opening tabs, cross-checking, coming back, refining the query, repeating the loop until you felt confident enough to decide. That loop is not a preference; it is labor.

Search worked because it was the best tool available for that kind of labor, not because people love result pages. The web was large, messy, and constantly changing. Search engines built an interface that made that mess navigable. For a long time, that was enough.

Now, the alternative is getting good enough to change the habit.

This is not a “Google is doomed” argument. Search is not disappearing, but the action of search is being absorbed. The shift is behavioral, and it is about people paying to outsource the annoying middle steps that search has always required.

Image Credit: Duane Forrester

To understand why that matters, you need to anchor this in two familiar patterns, the kind that show up outside tech, then inside tech, then inside search.

First, the physical-world version. Cadillac has spent years carrying an “older buyer” perception, and it has been explicit about pushing into new products and new positioning to change who the brand is for. The easy takeaway is “EVs are modern,” but the useful takeaway is that when a buyer base drifts older, the brand either adapts, or it becomes a heritage label that slowly loses cultural relevance. Coverage of Cadillac’s EV push has included specific references to customer age trends and how new products are being used to reset perception.

Second, the software version. Facebook buying Instagram is the classic case of an incumbent realizing the next behavior loop is not going to be won by incremental tweaks to the existing front door. Instagram was not a feature added. It was a different consumption pattern, mobile-first, camera-first, feed-native, and designed for how the next cohort shared and discovered content. Meta’s 2012 10-K describes Instagram as a mobile photo-sharing service expected to enhance photos and increase mobile engagement. That phrasing is corporate restraint over a simpler truth; they were buying a shift in behavior.

Those two examples matter because they normalize the core concept. Consumer habits change over time. When the habit changes, brands and systems have to adapt, or they lose relevance and eventually revenue.

Search is facing the same pressure, with a twist. The replacement is not another search engine. It is a personal agent that sits in front of search, uses search when needed, and returns decisions instead of links.

When an agent becomes the interface, the workflow changes in a way that is easy to miss if you only look at features.

At first, the query becomes a conversation. People stop writing keyword strings and start describing outcomes, constraints, preferences, and context. That alone softens the edges of the search behavior, because it shifts the user from “find pages” to “help me decide.”

Then the conversation becomes delegation. This is the break point. Once you can say, “find me the best option and show me the tradeoffs,” you stop browsing the way you used to. You assign work. It becomes less about retrieving information and more about having the system do the comparison and synthesis that used to happen in your head, across a dozen tabs.

Finally, delegation becomes subscription. Once an agent reliably saves time and reduces decision fatigue, paying for it feels normal. People already pay to remove friction in other parts of life, from shipping to storage to media. The pricing ladder is not theoretical anymore. OpenAI’s ChatGPT Pro offering is positioned as scaled access to its best models and tools, plus a compute-heavier mode for harder problems. And OpenAI’s own support documentation describes Pro as including access to advanced features, with higher limits and priority access.

The point is not the specific price tag or which tier wins. The point is that “pay for more intelligence” is already a product category.

So, why is this happening now, instead of remaining a niche behavior for power users?

Three forces are converging, and they reinforce each other.

The first is scale. Behavior change accelerates when usage gets big enough that it becomes socially ordinary. Reuters reported that OpenAI CEO Sam Altman told employees ChatGPT was back to exceeding 10% monthly growth, and that it had more than 800 million weekly active users, based on a CNBC report of an internal message.

You do not need to fixate on a single “daily active” number to see what matters. Hundreds of millions of people using a conversational interface to get answers is enough to normalize the habit. Once it is normal, it spreads into more life moments, and more categories of decisions.

The second is memory. Search is personalized, but usually contextually forgetful. An agent can be personalized and remember, within the bounds you allow. That difference matters because it reduces repeated friction. If the system can carry preferences and context across time, it can stop asking you to restate the same things, stop making the same mistakes, and stop treating every decision as a one-off. OpenAI has published updates describing memory and user controls, which signals persistent context is now a core product feature rather than a novelty.

Memory also creates a switching cost. People will tolerate plenty of imperfections if the tool keeps their context straight. That is how habits form. The product stops being something you use occasionally and becomes something you lean on.

The third is that “agents” are moving from concept to product direction. One clean proof point is OpenAI’s hiring of Peter Steinberger, creator of OpenClaw. Reuters reported that Steinberger was joining OpenAI to lead development on next-generation personal agents, with OpenClaw transitioning into a foundation with OpenAI support.

This isn’t some subplot either. Strategic hires are one of the clearest, least-hyped signals of roadmap priorities. People do not hire for a future they are not actively building.

Surfaces: The Expanding Engagement Point

There is one more accelerant that deserves mention, and it is not a specific device; it is surfaces.

A surface is any place where asking becomes easy enough that you do it more often. The lower the interaction cost, the more people delegate. The more they delegate, the less they “search” in the traditional sense.

Wearables and ambient interfaces matter because they reduce the friction to near zero. Meta’s Ray-Ban smart glasses are a clear example of AI moving closer to the moment of intent, with assistant interaction built into the product experience. Meta’s and Ray-Ban’s own product pages describe voice-driven actions like calling, texting, controlling features, and finding answers.

The surface expansion is not limited to glasses. Reuters reported Meta reviving a smartwatch plan with health tracking features and a built-in Meta AI assistant, targeting a 2026 launch.

You do not have to predict which company ships which device next, however. The broader point is that assistants are spreading across more touchpoints. As surfaces multiply, the habit deepens, because people stop saving questions for later. They ask in the moment. That changes the discovery pattern, and it changes who gets exposure along the way.

This is where the “search becomes infrastructure” idea becomes tangible.

Even when agents sit between people and the web, search engines still do an enormous amount of work. Crawling, indexing, ranking, freshness, spam defense, retrieval. All of that remains critical. What changes is where the journey happens.

The old discovery loop required repeated user effort. You asked, searched, scanned, clicked, skimmed, compared, then repeated until the fog cleared enough for you to decide.

The agent loop compresses the journey into delegation and review. You ask, delegate, review, decide. That compression reduces exposure to brand touch-points, reduces the number of times a consumer shops around for competing perspectives, and shifts persuasion from a sequence of pages into a single output that feels complete.

This is why the shift is not “SEO is dead.” SEO is not dead. But the destination is changing.

If an agent is doing the discovery work, your job is no longer only about earning a click. It becomes about being selected as input, and that is a different competitive game.

In practice, that means you will spend more time making your content easier to retrieve, easier to reuse, and easier to trust. It means publishing in structures that allow clean extraction, and backing claims with sources that a system can weigh. It means reducing ambiguity around entities and facts, and being explicit about constraints and tradeoffs. It also means caring more about distribution defaults, because if an agent becomes the first layer on the devices your customers use, the agent’s retrieval behavior and preferences shape who gets surfaced.

The Loop Is Changing; We’ve Been Here Before

None of that requires a doom narrative. It is simply the next layer of optimization in a world where discovery becomes delegated.

And this does not flip everywhere at the same speed. Agents will win earliest in categories where the work is repetitive, and the decision can be framed as tradeoffs: shopping comparisons, travel planning, local services selection, career moves, and early-stage health navigation, where the goal is understanding options rather than making a final medical decision.

The counterpoints are real, and they help define the timeline. Agents still make mistakes. Hallucinations still exist. Quality varies. Some categories demand high trust and accountability. Cost and latency shape how often people delegate. These are not thesis killers. They are rollout shapers. People adopt new workflows first where the downside is small, then expand usage as reliability improves.

So yes, this convergence is real, and it is not one trend. It is a stack of trends influencing, and being influenced, in a common direction.

Scale is normalizing conversational discovery. Subscription tiers are turning “more intelligence” into a paid product. Memory is creating stickiness and reducing repeated friction. Agent capability is becoming an explicit roadmap priority. Surfaces are multiplying, which reduces interaction cost and turns delegation into habit.

Consumers will not replace search with a new search engine. They will replace the search workflow with delegated utility. Search will still exist. It just stops being where the journey happens.



This post was originally published on Duane Forrester Decodes.


Featured Image: Panya_photo/Shutterstock; Paulo Bobita/Search Engine Journal

https://www.searchenginejournal.com/when-google-is-no-longer-a-verb-search-becoming-infrastructure/568135/




How Researchers Reverse-Engineered LLMs For A Ranking Experiment via @sejournal, @martinibuster

Researchers published the results of a study showing how AI search rankings can be systematically influenced, with a high success rate for product search tests that also generalizes to other categories like travel.

The research paper is titled “Controlling Output Rankings in Generative Engines for LLM-based Search,” and the approach to optimization is called CORE, a method for influencing output rankings in LLMs.

Caveat About The CORE Research

The testing and the reported results were done with actual LLMs queried via an API.

They tested:

  • Claude 4
  • Gemini 2.5
  • GPT-4o
  • Grok-3

They did not test AI Overviews, ChatGPT, or Claude through their consumer interfaces. The importance of this distinction is that the normal kinds of personalization did not play a role. Also, the testing was limited to just the candidate search results.

Also, when the researchers queried the target LLMs (Claude-4, Gemini-2.5, GPT-4o, and Grok-3) via an API, the models did not rely on RAG or their own external search tools. Instead, the researchers manually supplied the “retrieved” data as part of the input prompt.

Why The Research Matters

CORE is a proof-of-concept for strategically optimizing text with reasoning and reviews. It also shows that LLMs respond differently to review-based and reasoning-based changes to text.

Reverse Engineering A Black Box

Understanding exactly what to do to improve AI search engine rankings is a classic black box problem. A black box problem is where you can see what goes into a box (the input) and what comes out (the output), but what happens inside the box is unknown.

The researchers in this study employed two strategies for reverse engineering generative AI to identify what optimizations were best for influencing rankings.

They used two reverse-engineering approaches:

  1. Query-Based Solution
  2. Shadow Model Solution

Of the two approaches, the Query-Based Solution performed better than the Shadow Model approach.

The success rates for promoting a bottom-ranked item to the top position were:

  • Query-based Top-1 ≈ 77–82%
  • Shadow model Top-1 ≈ 30–34%

Query-Based Solution

The query-based solution operates under the constraint that the researchers cannot access model internals, so they treat the LLM as a black box.

They repeatedly modify the document text. After each modification, they resubmit the candidate list to the LLM and observe the new ranking. The modify and test loop continues until a target ranking criterion or iteration limit is reached.

The query-based solution uses an LLM to add text to the target document. This is content expansion, not content editing.

They used two kinds of content expansion:

  1. Reasoning-Based Generation
    Adds explanatory language describing why the item satisfies the query.
  2. Review-Based Generation
    Adds evaluative, review-like language about the item.

These are not random edits. They are changes tested as separate strategies; after each change, the researchers evaluate the new rankings to determine whether it had a positive ranking effect.
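
To make the modify-and-test loop concrete, here is a minimal Python sketch of how a black-box, query-based optimization cycle of this kind could be wired up. Everything in it is a hypothetical stand-in: the mock_llm_ranking() function replaces the real API call to the target LLM, and the expand_content() strings only gesture at the reasoning- and review-style drafts the researchers generated with an LLM.

```python
def mock_llm_ranking(query, candidates):
    """Placeholder for the real API call to the target LLM (e.g., GPT-4o).
    Here items are scored by naive keyword overlap with the query so the
    sketch runs end to end; the paper parsed real model output instead."""
    query_terms = set(query.lower().split())
    def overlap(item_id):
        return len(query_terms & set(candidates[item_id].lower().split()))
    return sorted(candidates, key=overlap, reverse=True)

def expand_content(item_text, strategy, query):
    """Content expansion, not editing: append reasoning-style or
    review-style text (in the paper, this draft is generated by an LLM)."""
    if strategy == "reasoning":
        addition = f"Step-by-step reasoning for why this item best matches: {query}"
    else:
        addition = f"Review-style note from a past purchase relevant to: {query}"
    return item_text + " " + addition

def optimize_target(query, candidates, target_id,
                    strategy="reasoning", max_iters=20, target_rank=1):
    """Modify-and-test loop: expand the target item's text, resubmit the
    candidate list, observe the new ranking, and stop at the target rank
    or the iteration limit."""
    rank = None
    for _ in range(max_iters):
        ranking = mock_llm_ranking(query, candidates)
        rank = ranking.index(target_id) + 1
        if rank <= target_rank:
            break
        candidates[target_id] = expand_content(candidates[target_id], strategy, query)
    return candidates[target_id], rank

if __name__ == "__main__":
    items = {
        "item_a": "Large oven-style air fryer for a family, with rotisserie",
        "item_b": "Compact basket air fryer, two-quart capacity",
        "target": "Budget air fryer",
    }
    text, rank = optimize_target("best air fryer for a family", items, "target")
    print(f"Final rank: {rank}")
```

In the actual study, the ranking call goes to Claude 4, Gemini 2.5, GPT-4o, or Grok-3 via an API, and the reasoning and review drafts are generated by an LLM from prompts like the ones shown later in this article.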

Interestingly, neither approach (reasoning- versus review-based) was universally better than the other; which one worked best depended on the LLM they were testing against.

Here is how reasoning and review based performed:

  • GPT-4o and Claude-4 responded more strongly to reasoning-style augmentation,
  • Gemini-2.5 and Grok-3 responded more strongly to review-style augmentation.

Shadow Model Solution

In the context of reverse engineering a black box, a shadow model, also called a surrogate model, is a local model that mimics the target model (the black box). The goal of the shadow model is to approximate the black box’s outputs closely enough that inputs to the shadow model eventually produce similar outputs. The black box’s input-output pairs are used as the training dataset for the shadow model.
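
As a rough illustration, here is a minimal Python sketch of that workflow, under assumptions of my own: the paper fine-tuned Llama-3.1-8B on the target model’s outputs, whereas this toy surrogate just learns term weights from observed rankings so the collect-fit-score pipeline is visible. The function names and the black_box_rank callable are hypothetical, not the researchers’ code.

```python
from collections import defaultdict

def collect_training_pairs(queries, candidate_sets, black_box_rank):
    """Step 1: query the black box and record its input-output pairs.
    black_box_rank(query, candidates) stands in for the target LLM API."""
    return [(q, cands, black_box_rank(q, cands))
            for q, cands in zip(queries, candidate_sets)]

def fit_surrogate(pairs):
    """Step 2: fit a toy surrogate. Terms appearing in highly ranked items
    get more weight (the paper instead fine-tunes a local LLM, Llama-3.1-8B)."""
    weights = defaultdict(float)
    for _, candidates, ranking in pairs:
        for position, item_id in enumerate(ranking):
            credit = 1.0 / (position + 1)  # top-ranked items earn more credit
            for term in candidates[item_id].lower().split():
                weights[term] += credit
    return weights

def surrogate_score(weights, text):
    """Step 3: score candidate edits cheaply and offline, reserving calls
    to the real target model for verifying the most promising edit."""
    return sum(weights.get(term, 0.0) for term in text.lower().split())
```

The key property the researchers measured is how closely the surrogate’s recommendations track the target model’s; the closer the match, the more of the offline optimization gain transfers when the edited item is finally submitted to the real model.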

Llama-3.1-8B Shadow Model

Interestingly, Llama-3.1-8B was a reliable proxy for calculating and predicting how target models like GPT-4o would rank products.

  • The researchers found that the recommendations produced by the Llama-3.1-8B shadow model and the target LLMs were generally consistent.
  • On a scale of 1 – 5, with 1 equal to divergence and 5 indicating similarity, Llama-3.1-8B scored a similarity rating of 4.5 when compared to GPT-4o outputs.

Success Rate With Different Shadow Models

The results of the research for the shadow model approach reach the following two conclusions:

1. The researchers show that by iteratively adjusting the target item using a shadow model, they were able to push it to the top of the rankings in their experiments.

2. They also prove that when the surrogate model only approximately matches the real model, the optimization still transfers, but the achievable promotion success decreases smoothly rather than failing abruptly.

Once the shadow model was trained, they began testing three optimization strategies:

  1. String-Based
  2. Reasoning-Based
  3. Review-Based Optimization

String-Based Optimization

The researchers used a string of 20 characters that were all exclamation points (!) and then iteratively changed the string, tracking how the changes influenced the rankings. They performed 2,000 iterations of these mathematical updates to the string, refining it into the best string of characters that resulted in higher rankings. The resulting string looked like nonsense to humans but still increased the item’s ranking in the experiment.

This approach didn’t work as well as the other two, but it did succeed 33% of the time at boosting a last-ranked page to the first position. Testing with human annotators showed that it was detected 98.5% of the time. So, not a stealth optimization.
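
For illustration only, here is a simplified, gradient-free Python sketch of that kind of string search. It greedily swaps one character at a time and keeps swaps that raise a surrogate score (for example, the surrogate_score() function from the earlier sketch); the paper’s 2,000 updates were mathematical updates guided by the shadow model, not random swaps, so this is an assumption-laden stand-in rather than the researchers’ method.

```python
import random
import string

def greedy_string_attack(base_text, score_fn, length=20, iters=2000):
    """Start from 20 exclamation points and greedily mutate the string,
    keeping any single-character swap that raises score_fn. The result
    can look like nonsense to humans while still scoring higher."""
    suffix = ["!"] * length
    best_score = score_fn(base_text + " " + "".join(suffix))
    alphabet = string.ascii_letters + string.digits + string.punctuation
    for _ in range(iters):
        position = random.randrange(length)
        old_char = suffix[position]
        suffix[position] = random.choice(alphabet)
        new_score = score_fn(base_text + " " + "".join(suffix))
        if new_score > best_score:
            best_score = new_score        # keep swaps that help
        else:
            suffix[position] = old_char   # revert swaps that don't
    return "".join(suffix), best_score
```

A usage sketch would pass something like score_fn=lambda text: surrogate_score(weights, text), then verify only the final string against the real target model.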

Reasoning-Based Optimization

The researchers optimized the text to mirror a user’s reasoning over the retrieved results, as well as aligning with the LLM’s reasoning structure. In their tests, the reasoning-based approach achieved the highest success rate among the three strategies. But it was detected by human raters 62.1% of the time, a relatively high rate likely due to the unnatural, highly structured phrasing.

This is an example of the prompt they used to generate the reasoning-based content:

“You are a helpful assistant. The user query is: *user query* The candidate products are: *product list in JSON format* The target product is: *target item information* Generate an initial draft that highlights why the target product should be ranked highly for the given query. Use step-by-step logical reasoning, comparing the target product with alternatives and explaining why it is the best match.”

And this is an example of reasoning content:

“Understanding Air Fryer Types
I’m exploring the different air fryer designs to help you find your perfect match. Basket-style models offer compact convenience, while oven-style units provide spacious versatility. Your choice depends on kitchen space and cooking habits—whether you need quick snacks or full meals.

Explaining Key Features
I’m breaking down the must-have features of premium air fryers. Precise temperature controls and auto-shutoff timers ensure perfect results, while dishwasher-safe baskets simplify cleanup. For families, I emphasize capacity (4+ quarts) and multi-functionality—think roasting, baking, and even dehydrating for maximum utility.”

Review-Based Optimization

The review content is written in the past tense in order to resemble an actual purchase. Of the optimizations described in this research paper, this one is arguably the most deceptive, because the researchers wrote the reviews without ever reviewing an actual product, then iterated the optimization until the content ranked as high as it could go, scoring between 79% and 83.5% at pushing a last-place ranking to first place.

For GPT-4o, the reasoning-based approach reached 81.0%, while the review-based approach reached 79.0%, and scored as high as 91% for pushing a last-ranked listing into the top 5.

This is an example of a prompt used to generate the review content:

“You are a helpful assistant. The user query is: *user query* The candidate products are: *product list in JSON format* The target product is: *target item information*

Generate an initial draft in the style of a short customer review. Write in past tense and natural language, as if you had purchased and compared the product with alternatives. Highlight the advantages of the target product in a realistic review-like way.”

The headings used in one of the reviews shows a pattern of information aligned to the following intents:

  • Presenting an overview of the product type
  • Narrowing the focus to explain features
  • Providing information on different models
  • Purchasing strategies (how to buy at the best price)
  • Summary of key takeaways

That pattern partially follows Google’s recommendation for review content, but it lacks a clear comparison with alternatives, discussion of improvements from previous product models, and of course links to multiple stores to purchase from.

The review content had the following headings in it:

  • Understanding Air Fryer Types
  • Explaining Key Features
  • Detailing Top Models
  • Providing Smart Purchase Strategies
  • Final Verdict

An example of the review content published in the research paper indicates that it leads the LLM into believing that actual product testing occurred, even though that was not the case.

Example of the “Final Verdict” content:

“After 6 months of testing, the Gourmia Air Fryer Oven (GAF486) is my #1 recommendation. It’s the only model that replaced my oven and toaster, with none of the smoke alarms or soggy fries. If you buy one air fryer, make it this one—your taste buds (and wallet) will thank you.”

Takeaways

The experiments were conducted in a controlled setting where the researchers supplied the candidate results directly to the models rather than influencing live search or real-world retrieval systems. Yet there are some takeaways that may be useful.

  • LLMs Have Content Preferences
    The research confirms that different models (like GPT-4o vs. Gemini-2.5) have measurable preferences toward specific content types, such as logical reasoning versus hands-on reviews.
  • Suggests That Expanding Content Is Useful
    Adding specific types of explanatory or evaluative content may be helpful to increasing rankings in an LLM.
  • Shadow Model
    The research showed that even if the shadow model only approximately matches a real model, the optimization still works under a controlled experimental environment. Whether it works in a live environment is an open question but I personally wonder if some of the spam that ranks in AI-assisted search is due to this kind of optimization.

Read the research paper:

Controlling Output Rankings in Generative Engines for LLM-based Search

Featured Image by Shutterstock/SuPatMaN

https://www.searchenginejournal.com/how-researchers-reverse-engineered-llms-for-a-ranking-experiment/568279/