Report describes Apple’s “organizational dysfunction” and “lack of ambition” in AI

[Image: Siri, Apple’s sort-of-AI assistant, pops up in iOS. Credit: Samuel Axon]

A new behind-the-scenes report in The Information details Apple’s struggles to keep up with AI features and innovation amid the rise of large language models (LLMs) that drive groundbreaking tools like ChatGPT.

The article focuses on the efforts of John Giannandrea, the company’s AI chief since 2018, to bring order to a fragmented AI group and make Apple more competitive with companies like Google, which Giannandrea left to join Apple.

In some ways, The Information’s piece is a roundup and confirmation of what we already know, such as Apple employees’ previously reported frustrations with the limitations of Siri’s underlying technology. But it calls on new sources to add context and depth to the narrative.

For example, it reveals that the team that has been working on Apple’s long-in-development mixed reality headset was so frustrated with Siri that it considered developing a completely separate, alternative voice control method for the headset.

But the article goes beyond recounting neutral details; it lays out all that information in a structured case arguing that Apple is ill-prepared to compete in the fast-moving field of AI.

Think different, indeed

As Google restructures itself to pour efforts into products like Bard and Microsoft injects ChatGPT and related AI features into a wide variety of products from Bing to Word to GitHub, Apple’s recent approach to AI has been different; it has focused almost exclusively on practical applications in features for the iPhone. The emphasis is on using machine learning to improve palm detection on the iPad, give iPhone users more neat photo editing tricks, and improve suggestions in Apple’s content-oriented apps, among other similar things.

That’s a different tack than the ambitious, blue-sky experimentation and innovation you see at companies like OpenAI, Microsoft, or Google. Apple has been comparatively conservative, treating AI and machine learning as tools to improve the user experience rather than as a way to reinvent how things are done or disrupt existing industries.

In fact, The Information’s sources offer up numerous examples of senior Apple leadership putting the brakes on (or at least reining in) aggressive efforts within the company’s AI group, for fear of seeing products like Siri exhibit the same kinds of embarrassing factual errors or unhinged behavior that ChatGPT and its ilk have displayed. In other words, Apple isn’t keen on tolerating what many working in AI research and product development call “hallucinations.”

For example, Siri’s responses are not generative; they’re human-written and human-curated. Apple leadership has been hesitant to let Siri developers push the voice assistant toward the detailed back-and-forth conversations you see in the latest LLM-driven chatbots. Those are seen as more attention-grabbing than useful, and Apple is worried about being held responsible for bad answers.