The Facts About Google Click Signals, Rankings, And SEO via @sejournal, @martinibuster

Clicks as a ranking-related signal have been a subject of debate for over twenty years, although nowadays most SEOs understand that clicks are not a direct ranking factor. The simple truth is that clicks are raw data, processed in much the same way as human rater scores.

Clicks Are A Raw Signal

The DOJ Antitrust memorandum opinion from September 2025 mentions clicks as a “raw signal” that Google uses. It also categorizes content and search queries as raw signals. This is important because a raw signal is the lowest-level data point, one that is processed into higher-level ranking signals or used to train a model like RankEmbed and its successor, RankEmbedBERT.

Those are considered raw signals because they are:

  • Directly observed
  • Not yet interpreted or used as training data

The DOJ document quotes professor James Allan, who gave expert testimony on behalf of Google:

“Signals range in complexity. There are ‘raw’ signals, like the number of clicks, the content of a web page, and the terms within a query.

…These signals can be created with simple methods, such as counting occurrences (e.g., how many times a web page was clicked in response to a particular query). Id. at 2859:3–2860:21 (Allan) (discussing Navboost signal)”
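Counting occurrences, as Allan describes, is the simplest possible treatment of the raw click signal. A minimal sketch of what counting clicks per query-page pair looks like (the queries, URLs, and log format here are invented purely for illustration):

```python
from collections import Counter

# Hypothetical click log: each entry is a (query, clicked_page) pair.
click_log = [
    ("best running shoes", "example.com/shoes"),
    ("best running shoes", "example.com/shoes"),
    ("best running shoes", "example.com/reviews"),
    ("trail shoes", "example.com/shoes"),
]

# The "raw signal" is nothing more than a count per query-page pair.
raw_clicks = Counter(click_log)

print(raw_clicks[("best running shoes", "example.com/shoes")])  # 2
print(raw_clicks[("trail shoes", "example.com/shoes")])         # 1
```

The counts themselves decide nothing; they are the input that downstream systems aggregate and process.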

He then contrasts the raw signals with how they are processed:

“At the other end of the spectrum are innovative deep-learning models, which are machine-learning models that discern complex patterns in large datasets.

Deep models find and exploit patterns in vast data sets. They add unique capabilities at high cost.”

Professor Allan explains that “top-level signals” are used to produce the “final” scores for a web page, including popularity and quality.

Raw Signals Are Data To Be Further Processed

Navboost is mentioned several times in the September 2025 antitrust document as popularity data. It’s not mentioned in the context of clicks having a ranking effect on individual sites.

It’s referred to as a way to measure popularity and intent:

“…popularity as measured by user intent and feedback systems including Navboost/Glue…”

And elsewhere, in the context of explaining why some of the Navboost data is privileged:

“They are ‘popularity as measured by user intent and feedback systems including Navboost/Glue’…”

And in the context of the proposed remedy, the document lists the user-side datasets Google must share:

“Under the proposed remedy, Google must make available to Qualified Competitors …the following datasets:

1. User-side Data used to build, create, or operate the GLUE statistical model(s);

2. User-side Data used to train, build, or operate the RankEmbed model(s); and

3. The User-side Data used as training data for GenAI Models used in Search or any GenAI Product that can be used to access Search.

Google uses the first two datasets to build search signals and the third to train and refine the models underlying AI Overviews and (arguably) the Gemini app.”

Clicks, like human rater scores, are a raw signal used further up the algorithmic chain: to train AI models to better match web pages to queries, or to generate a quality or relevance signal that is then combined with the rest of the ranking signals by a ranking engine or a rank modifier engine.

70 Days Of Search Logs

The DOJ document makes reference to using 70 days of search logs. But that’s just eleven words in a larger context.

Here is the part that is frequently quoted:

“70 days of search logs plus scores generated by human raters”

I get it, it’s simple and direct. But there is more context to it:

“RankEmbed and its later iteration RankEmbedBERT are ranking models that rely on two main sources of data: [Redacted]% of 70 days of search logs plus scores generated by human raters and used by Google to measure the quality of organic search results.”

The 70 days of search logs are not click data used directly for ranking in Google Search, AI Mode, or Gemini. It’s aggregate data that is further processed in order to train specialized AI models like RankEmbedBERT, which in turn rank web pages based on natural language analysis.

That part of the DOJ document does not claim that Google uses click data directly to rank search results. Like the human rater data, it is raw material that other systems use as training data or process further.

What Is Google’s RankEmbed?

RankEmbed is a natural language approach to identifying relevant documents and ranking them.

The same DOJ document explains:

“The RankEmbed model itself is an AI-based, deep-learning system that has strong natural-language understanding. This allows the model to more efficiently identify the best documents to retrieve, even if a query lacks certain terms.”

It’s trained on less data than previous models. The training data consists partly of query terms paired with web pages:

“…RankEmbed is trained on 1/100th of the data used to train earlier ranking models yet provides higher quality search results.

…Among the underlying training data is information about the query, including the salient terms that Google has derived from the query, and the resultant web pages.”

That’s training data for training a model to recognize how query terms are relevant to web pages.

The same document explains:

“The data underlying RankEmbed models is a combination of click-and-query data and scoring of web pages by human raters.”

In the context of this specific passage, it is clearly describing the use of click data (and human rater data) to train AI models, not to directly influence rankings.

What About Google’s Click Ranking Patent?

Way back in 2006, Google filed a patent related to clicks called Modifying Search Result Ranking Based On Implicit User Feedback. The invention is about the mathematical formula for creating a “measure of relevance” out of the aggregated raw data of clicks (plural).

The patent distinguishes between the creation of the signal and the act of ranking itself. The “measure of relevance” is output to a ranking engine, which then can add it to existing ranking scores to rank search results for new searches.

Here’s what the patent describes:

“A ranking sub-system can include a rank modifier engine that uses implicit user feedback to cause re-ranking of search results in order to improve the final ranking presented to a user of an information retrieval system.

User selections of search results (click data) can be tracked and transformed into a click fraction that can be used to re-rank future search results.”

That “click fraction” is a measure of relevance. The invention described in the patent isn’t about tracking individual clicks; it’s about the mathematical measure (the click fraction) that results from combining all those individual clicks. That includes the Short Click, Medium Click, Long Click, and the Last Click.

Technically, it’s called the LCIC (Long Click divided by Clicks) Fraction. It’s “clicks” plural because it’s making decisions based on the sums of many clicks (aggregate), not the individual click.

That click fraction is an aggregate because:

  • Summation:
    The “first number” used for ranking is the sum of all those individual weighted clicks for a specific query-document pair.
  • Normalization:
    It takes that sum and divides it by the total count of all clicks (the “second number”).
  • Statistical Smoothing:
    The system applies “smoothing factors” to this aggregate number to ensure that a single click on a “rare” query doesn’t unfairly skew the results, especially for spammers.

That 2006 patent describes its weighting formula like this:

“A base LCC click fraction can be defined as:

LCC_BASE = [#WC(Q,D)] / [#C(Q,D) + S0]

where #WC(Q,D) is the sum of weighted clicks for a query URL…pair, #C(Q,D) is the total number of clicks (ordinal count, not weighted) for the query-URL pair, and S0 is a smoothing factor.”

That formula describes summing and dividing the data from many users to create a single score for a document. The “query-URL” pair is a “bucket” of data that stores the click behavior of every user who ever typed that specific query and clicked that specific search result. The smoothing factor is the anti-spam part that includes not counting single clicks on rare search queries.
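The formula can be sketched in a few lines of code. This is a toy illustration of the patent’s base click fraction, not Google’s implementation: the click-type weights and the smoothing value below are invented for the example.

```python
# Hypothetical weights: a "long" click counts fully, shorter clicks count less.
# These values are illustrative, not the patent's actual parameters.
CLICK_WEIGHTS = {"long": 1.0, "medium": 0.5, "short": 0.1}

def lcc_base(clicks, s0=10.0):
    """Base LCC click fraction for a single query-URL pair.

    clicks: list of click types ("long", "medium", "short") observed for
    this query-URL pair. s0 is the smoothing factor from the formula.
    """
    weighted = sum(CLICK_WEIGHTS[c] for c in clicks)  # #WC(Q,D): sum of weighted clicks
    total = len(clicks)                               # #C(Q,D): ordinal click count
    return weighted / (total + s0)                    # LCC_BASE = #WC / (#C + S0)

# One click on a rare query barely moves the score, thanks to smoothing:
print(round(lcc_base(["long"]), 3))         # 1.0 / 11.0 ≈ 0.091
# Many long clicks push the fraction much higher:
print(round(lcc_base(["long"] * 100), 3))   # 100.0 / 110.0 ≈ 0.909
```

Note how the smoothing factor does the anti-spam work the patent describes: a single click cannot produce a confident relevance score, while a large aggregate of clicks dominates the denominator and yields a stable fraction.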

Even back in 2006, clicks were just raw data, transformed across multiple stages of aggregation into a statistical measure of relevance before ever reaching the ranking stage. In this patent, the clicks themselves are not ranking factors that directly influence whether a site is ranked or not. They were used in aggregate as a measure of relevance, which in turn was fed into another engine for ranking.

By the time the information reaches the ranking engine, the raw data has been transformed from individual user actions into an aggregate measure of relevance.

  • The relationship between clicks and ranking is not as simple as “clicks drive search rankings.”
  • Clicks are just raw data.
  • Clicks are used to train AI systems like RankEmbedBERT.
  • Clicks do not directly influence search results. They have always been raw data: the starting point for systems that use the data in aggregate to create a signal that is then mixed into Google’s ranking decision-making systems.
  • So yes, like human rater data, raw data is processed to create a signal or to train AI systems.

Read the DOJ memorandum in PDF form here.

Read about four research papers about CTR.

Read the 2006 Google patent, Modifying search result ranking based on implicit user feedback.

Featured Image by Shutterstock/Carkhe

https://www.searchenginejournal.com/the-facts-about-google-click-signals-rankings-and-seo/572827/




Crypto scam lures ships into Strait of Hormuz, falsely promising safe passage

Crypto scammers are targeting the thousands of ships stranded near the Strait of Hormuz—and at least one ship that faced Iranian gunfire may have been tricked into believing it had paid Iran for safe passage.

The first warning of such a crypto scam came from the Greek maritime risk management company MARISKS on April 20, according to Reuters. The company alerted shipowners that scammers posing as Iranian authorities had sent messages to shipping companies asking for “transit fee” payments in bitcoin or tether.

That may be particularly confusing for shipping companies because of how Iran has asserted control over the Strait of Hormuz—a vital shipping channel and maritime chokepoint that normally allows Persian Gulf countries to provide one-fifth of the world’s oil and liquefied natural gas supply. Iranian authorities have demanded cryptocurrency payments from oil tankers to pass through the waterway and required ships to follow a route near Iran’s coastline to undergo inspection.

MARISKS identified one ship as having potentially fallen victim to crypto scams after it attempted to pass through the strait on April 18, although Reuters was unable to confirm that information. The incident supposedly occurred during a brief window when Iran claimed it was allowing ships to undergo inspection to pass through, but the ship in question turned back after Iranian military forces fired upon it. There are about 2,000 ships and 20,000 mariners still stranded near the strait.

That ship may not be alone in falling for a crypto scam while seeking safe passage. On April 22, the Liberia-flagged cargo ship Epaminondas, owned by the Greek company Technomar shipping and operated by the global shipping company MSC, was fired upon after it had reportedly received permission to pass through the strait, and authorities are checking whether the message purporting to grant safe passage “may have been fraudulent,” according to Ekathimerini.

https://arstechnica.com/security/2026/04/crypto-scam-lures-ships-into-strait-of-hormuz-falsely-promising-safe-passage/





Our newsroom AI policy

When we attribute a statement, a position, or a quote to a named source, that material comes from direct engagement with interviews, transcripts, published statements, or documents reviewed by the reporter. AI tools must not be used to generate, extract, or summarize material that is then attributed to a named source, whether as a direct quote, a paraphrase, or a characterization of someone’s views.

We don’t publish claims based solely on AI-generated summaries, and reporters may not represent any material as “reviewed” unless they have examined it directly.

Every author who uses AI tools in the course of reporting a story must disclose that use to their editors, and authors remain fully responsible for their content.

Images, audio, and video

Our visual content, including listing images, illustrations, and video, is produced by our editorial and art teams or sourced from photography services and wire providers. Our creative team may use AI tools in the production of certain visual material, but the creative direction and editorial judgment are human-driven.

We do not publish AI-generated images, audio, or video as authentic documentation of real events. We do not alter documentary media in ways that change their meaning. Standard production work, like color correction, cropping, and contrast adjustments, is fine.

When synthetic media is used in the context of reporting on AI, it will be clearly identified as AI-generated, with that disclosure placed as close to the material as possible.

Accountability is non-negotiable

Anyone who uses AI tools in our editorial workflow is responsible for the accuracy and integrity of the resulting work. This responsibility cannot be transferred to colleagues, editors, or the tools themselves. More broadly, maintaining the standards in this policy is a shared obligation across our editorial operation.

These standards have governed our editorial work since AI tooling became available. When violations occur, we take action. We’re publishing this reader-facing version because our readers deserve to see the rules we hold ourselves to, not just trust that they exist.

This policy was last updated April 22, 2026.

https://arstechnica.com/staff/2026/04/our-newsroom-ai-policy/




Lawsuit: Nintendo is getting tariff refunds—its customers should get them instead

The lawsuit also alleges violations of the Washington Consumer Protection Act, which prohibits unfair and deceptive acts. “Nintendo engaged in unfair acts by: (i) raising prices due to tariffs; (ii) failing to disclose that it intended to seek tariff refunds; and (iii) retaining tariff refunds despite having passed the costs to its customers,” the lawsuit said.

Of course, Nintendo didn’t know when it raised prices that the Supreme Court would strike down the tariffs the next year. It’s also unclear what it intends to do with tariff refunds that it will presumably receive sometime in the next 60 to 90 days.

Nintendo raised prices for Switch 2 accessories

The lawsuit points to price increases for Nintendo Switch 2 accessories that were announced in April 2025. The increases ranged from $1 to $10 per product. “For example, Nintendo raised the price of the Nintendo Switch 2 Pro Controller from $79.99 to $84.99 and the Nintendo Switch 2 Dock Set from $109.99 to $119.99,” the lawsuit said.

The lawsuit also mentions the August 2025 increases for the original Switch console, which ranged from $30 to $50 depending on the model. Nintendo President Shuntaro Furukawa told investors in May 2025 that “if tariffs are imposed, we recognize them as a part of the cost and incorporate them into the price,” the lawsuit said.

The lawsuit seeks a return “of all monies wrongfully obtained by Defendant.” It didn’t ask for a specific amount in damages, but said that federal court is the appropriate venue because the class includes people from multiple states and “the amount in controversy exceeds $5,000,000.” Nintendo of America is headquartered in the Washington district where the complaint was filed.

Even if the Trump administration issues all owed refunds, Nintendo and other companies will still have an ongoing tariff problem. Trump reacted to his Supreme Court loss by imposing a 10 percent tariff, claiming he has the authority to do so under the Trade Act of 1974, and more Trade Act tariffs could be on the way. States have sued to block Trump’s new tariffs, and there may be another long round of litigation over whether the president can issue tariffs under the Trade Act.

https://arstechnica.com/tech-policy/2026/04/lawsuit-nintendo-is-getting-tariff-refunds-its-customers-should-get-them-instead/





RFK Jr. won’t back CDC director on vaccines as agency scraps positive data

Kennedy replied without hesitation: “I’m not going to make that kind of commitment.”

“Because you probably won’t,” Ruiz said. “You’ll probably fire her, too, like you did director Monarez, because you will not accept the recommendations based on science.”

Suppressed science

A report from the Washington Post on Wednesday seemed to support Ruiz’s concern for Kennedy’s continued anti-vaccine interference. The Post reported that the CDC has decided to entirely scrap a scientifically vetted study that identified significant health benefits from the 2025–2026 COVID-19 vaccine. While Kennedy has called COVID-19 vaccines the “deadliest vaccine ever made,” the study found that the shot reduced the risk of emergency department or urgent care visits by 50 percent, and reduced the risk of COVID-19-associated hospitalizations by 55 percent, compared with healthy adults who did not get this season’s shot.

The study had previously cleared scientific review and was set for publication in the agency’s Morbidity and Mortality Weekly Report on March 19. But the study was instead held up by acting CDC director Jay Bhattacharya, who said he had concerns about the study’s methods. The study used a standard, widely accepted design. A flu vaccine study using the same design was published in the MMWR earlier in March.

Last month, Andrew Nixon, spokesperson for the Department of Health and Human Services, said that CDC scientists were working to address Bhattacharya’s concerns. But this week, Nixon told the Post that an “editorial assessment identified concerns regarding the methodological approach to estimating vaccine effectiveness and the manuscript was not accepted for publication.”

The Post’s sources said that was not an accurate account of what happened.

https://arstechnica.com/health/2026/04/rfk-jr-doubles-down-on-vaccine-meddling-as-cdc-junks-scientific-study/




Microsoft issues emergency update for macOS and Linux ASP.NET threat

Microsoft released an emergency patch for its ASP.NET Core to fix a high-severity vulnerability that allows unauthenticated attackers to gain SYSTEM privileges on devices that use the Web development framework to run Linux or macOS apps.

The software maker said Tuesday evening that the vulnerability, tracked as CVE-2026-40372, affects versions 10.0.0 through 10.0.6 of the Microsoft.AspNetCore.DataProtection NuGet, a package that’s part of the framework. The critical flaw stems from a faulty verification of cryptographic signatures. It can be exploited to allow unauthenticated attackers to forge authentication payloads during the HMAC validation process, which is used to verify the integrity and authenticity of data exchanged between a client and a server.

Beware: Forged credentials survive patching

During the time users ran a vulnerable version of the package, they were left open to an attack that would allow unauthenticated people to gain sensitive SYSTEM privileges that would allow full compromise of the underlying machine. Even after the vulnerability is patched, devices may still be compromised if authentication credentials created by a threat actor aren’t purged.

“If an attacker used forged payloads to authenticate as a privileged user during the vulnerable window, they may have induced the application to issue legitimately-signed tokens (session refresh, API key, password reset link, etc.) to themselves,” Microsoft said. “Those tokens remain valid after upgrading to 10.0.7 unless the DataProtection key ring is rotated.”

Microsoft describes ASP.NET Core as a “high-performance” web development framework for writing .Net apps that run on Windows, macOS, Linux, and Docker. The open-source package is “designed to allow runtime components, APIs, compilers, and languages [to] evolve quickly, while still providing a stable and supported platform to keep apps running.”

https://arstechnica.com/security/2026/04/microsoft-issues-emergency-update-for-macos-and-linux-asp-net-threat/



