7 problems facing Bing, Bard, and the future of AI search

This week, Microsoft and Google promised that web search is going to change. Yes, Microsoft did it in a louder voice while jumping up and down and saying “look at me, look at me,” but both companies now seem committed to using AI to scrape the web, distill what it finds, and generate answers to users’ questions directly — just like ChatGPT.

Microsoft calls its efforts “the new Bing” and is building related capabilities into its Edge browser. Google’s is called Bard, and while it’s not yet ready to sing, a launch is planned for the “coming weeks.” And of course, there’s the troublemaker that started it all: OpenAI’s ChatGPT, which exploded onto the web last year and showed millions the potential of AI Q&A.

Satya Nadella, Microsoft’s CEO, describes the changes as a new paradigm — a technological shift equal in impact to the introduction of graphical user interfaces or the smartphone. And with that shift comes the potential to redraw the landscape of modern tech — to dethrone Google and drive it from one of the most profitable territories in modern business. Even more, there’s the chance to be the first to build what comes after the web. 

But each new era of tech comes with new problems, and this one is no different. In that spirit, here are seven of the biggest challenges facing the future of AI search — from bullshit to culture wars and the end of ad revenue. It’s not a definitive list, but it’s certainly enough to get on with. 

A screenshot of the Bing UI. The user has asked “who did Ukraine’s Zelenskyy meet today.” The AI-compiled answer shows he met with the British parliament.

Image: The Verge

This is the big overarching problem, the one that potentially pollutes every interaction with AI search engines, whether Bing, Bard, or an as-yet-unknown upstart. The technology that underpins these systems — large language models, or LLMs — is known to generate bullshit. These models simply make stuff up, which is why some argue they’re fundamentally inappropriate for the task at hand.  

The biggest problem for AI chatbots and search engines is bullshit

These errors (from Bing, Bard, and other chatbots) range from inventing biographical data and fabricating academic papers to failing to answer basic questions like “which is heavier, 10kg of iron or 10kg of cotton?” There are also more contextual mistakes, like telling a user who says they’re suffering from mental health problems to kill themselves, and errors of bias, like amplifying the misogyny and racism found in their training data.

These mistakes vary in scope and gravity, and many simple ones will be easily fixed. Some people will argue that correct responses heavily outnumber the errors, and others will say the internet is already full of toxic bullshit that current search engines retrieve, so what’s the difference? But there’s no guarantee we can get rid of these errors completely — and no reliable way to track their frequency. Microsoft and Google can add all the disclaimers they want telling people to fact-check what the AI generates. But is that realistic? Is it enough to push liability onto users, or is the introduction of AI into search like putting lead in water pipes — a slow, invisible poisoning? 

Bullshit and bias are challenges in their own right, but they’re also exacerbated by the “one true answer” problem — the tendency for search engines to offer singular, apparently definitive answers.