Google’s Widely Rolled Out AI Search Engine Spouts Misinformation
The world’s most popular search engine is getting the facts wrong.
Google’s decision to make its AI-generated search results, AI Overviews, the default experience in the U.S. was met with swift criticism after users’ queries returned errors, concerning advice and misinformation.
In one example, when searching “what is in Google’s AI dataset,” Google’s AI summary said its AI model was trained on child sexual abuse material.
Google also erroneously claimed that Barack Obama is Muslim, provided incorrect advice on treating rattlesnake bites, and suggested adding glue to pizza sauce when people searched “cheese not sticking to pizza.”
“You can add 1/8 cup of non-toxic glue to the sauce to give it more tackiness,” Google answered.
The AI search engine also said geologists recommend eating one rock per day.
To be fair, many generative AI products start out riddled with inaccuracies before they grasp the intricacies and nuances of human language, and they tend to improve quickly. But Google’s haste to roll out the feature widely opens it up to more criticism.
“The pitfalls of infusing search with AI at this point run the gamut from creators who resist the use of their work to train models that could eventually diminish their relevance, to incorrect results put forth as fact,” said Jeff Ragovin, CEO of contextual targeting provider Semasio. “On this one, it looks like Google was a bit premature.”
The AI response about President Obama violated Google’s content policy, which includes careful considerations for content that may be explicit, hateful, violent or contradictory of consensus on public interest topics, a Google spokesperson told ADWEEK. The tech giant has blocked the violating overview from appearing on that query.
“Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce,” the spokesperson said. “We’re taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”
As synthetic content grows, so does the likelihood of errors
As the amount of synthetically generated content on the web increases, so does the risk of inaccuracies. Experts project that 90% of online content will be AI-generated by 2026, raising concerns that AI Overviews could unwittingly amplify misleading content.
Google’s push for AI-generated answers has also been controversial for denting publisher revenue: some publishers have seen referral traffic drop by as much as 80% after their content was used to train AI models.
On the flip side, publishers including Axel Springer, Dotdash Meredith, The Financial Times and most recently News Corp have struck deals with ChatGPT maker OpenAI to have content used in ChatGPT outputs.
“This is another demonstration of why Google, OpenAI and others should be aggressively pursuing licensing arrangements with premium publishers that have a track record of creating trusted content,” said Jason Kint, CEO of Digital Content Next. “And brands are proxies for that trust both with consumers and advertisers.”