Google’s Pandu Nayak shares his roadmap for MUM and how it can help the company handle more complex queries

For the most part, search engines have operated the same way for the last two decades. They’ve improved at determining intent, providing relevant results and incorporating different verticals (like image, video or local search), but the premise remains the same: input a text query and the search engine will return a mix of organic links, rich results and ads.

With more recent advancements, like BERT, search engines have increased their language processing capabilities, which enable them to better understand queries and return more relevant results. Even more recently, Google unveiled its Multitask Unified Model (MUM), a technology that is 1,000 times more powerful than BERT, according to Google, and combines language understanding with multitasking and multimodal input capabilities.

In a chat with Search Engine Land, Pandu Nayak, VP of search at Google, outlined how MUM might fundamentally change the way users interact with its search engine, the roadmap for MUM, and what Google is doing to ensure that the technology is applied responsibly.

MUM, Google’s latest milestone in language understanding

It’s easy to classify MUM as a more advanced version of BERT, especially since Google is treating it as a similarly important milestone for language understanding and touting it as being far more powerful. While both are based on transformer technology, and MUM has BERT’s language understanding capabilities built into it, MUM uses a different architecture (Google’s T5) and is capable of substantially more.

Training across more languages scales learning. “[MUM is] trained simultaneously across 75 languages,” Nayak said. “This is nice because it allows us to generalize from data-rich languages to languages with a paucity of data.” This may mean that MUM’s applications can be more easily transferred to more languages. If that’s true, it might help strengthen Google Search in those markets.

MUM isn’t limited to text. Another distinction is that MUM is multimodal: it can also take video and images as inputs. “Imagine taking a photo of your hiking boots and asking ‘Can I use these to hike Mt. Fuji?’” Prabhakar Raghavan, SVP at Google, said as a hypothetical example during the MUM unveiling at Google I/O. “MUM would be able to understand the content of the image and the intent behind your query.”

Prabhakar Raghavan providing examples of how MUM might be integrated into Google Search at Google I/O.

Multitasking also facilitates scaled learning. “MUM is also intrinsically multitasked,” Nayak said. The natural language tasks it can handle include (but are not limited to) ranking pages for a particular query, document review and information extraction. MUM handles multiple tasks in two ways: on the training side and on the use side.

“By training it on multiple tasks, those concepts are being learned to be more robust and general,” explained Nayak. “That is, they apply across multiple tasks rather than being applied only to a single task and being brittle when applied to a different task.”
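Nayak’s description of the training side can be made a little more concrete with a sketch. The following Python (PyTorch) snippet is a hypothetical illustration, not anything Google has published: a single shared encoder is optimized against several task-specific heads, so the gradients from every task shape the same underlying representations.

```python
# Hypothetical sketch of multitask training: one shared text encoder whose
# representations are updated by several task-specific heads. Illustrative
# only -- this is not MUM's or T5's actual code.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, vocab_size: int = 30000, dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # (batch, seq_len) token ids -> one pooled vector per example
        return self.encoder(self.embed(token_ids)).mean(dim=1)

encoder = SharedEncoder()
heads = nn.ModuleDict({
    "ranking": nn.Linear(256, 1),         # relevance score for a query/document pair
    "extraction": nn.Linear(256, 2),      # simplified stand-in for information extraction
    "classification": nn.Linear(256, 5),  # e.g. topic labels
})
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(heads.parameters()), lr=1e-4
)

def training_step(task: str, token_ids: torch.Tensor,
                  target: torch.Tensor, loss_fn) -> float:
    """One step for any task: each task's gradient flows into the shared encoder,
    which is how training on multiple tasks pushes it toward general concepts."""
    loss = loss_fn(heads[task](encoder(token_ids)), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```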

On the use side, Google does not envision MUM rolling out as a singular feature or launch in search: “We think of it as a platform on which different teams can build out different use cases,” Nayak said, adding, “The idea is that over the next few months, we’re going to see many, many teams within search using MUM to improve whatever tasks they were doing to help search, and the COVID vaccine example is a really good example of that.”

Google’s roadmap for MUM

Where we are now, the short-term. Google’s short-term goals for MUM largely focus on knowledge transfer across languages. The first public application of MUM, in which it identified 800 variations of vaccine names across 50 languages in a matter of seconds, is a good representation of this stage of its rollout. It’s important to note that Google already had a subset of COVID vaccine names that would trigger the COVID vaccine experience in the search results, but MUM allowed it to get a much larger set of vaccine names, which enabled the search results to trigger in more situations, when appropriate.

And, as part of this short-term stage, teams within Google have begun to incorporate MUM into their projects. “We have tens of teams that are experimenting with MUM right now, many of them are finding great utility in what they’re seeing here,” Nayak said, declining to provide more specific details at this time.

Multimodal features planned for the medium-term future. “In the medium term, we think multimodality is where the action is — that’s going to be like a new capability for search that we have not had before,” Nayak said, expanding on the image search example that Prabhakar Raghavan first used at Google I/O.

Nayak envisions an interface in which users can upload images and ask text questions about those images. Instead of returning a simple answer that may result in a zero-click search, he sees Google returning relevant results that bridge the gap between the uploaded image and the user’s query.
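As a purely hypothetical sketch of what such a request might look like at the interface level, the structure, field names and build_request() helper below are invented for illustration; Google has not published an interface for MUM.

```python
# Hypothetical sketch of a multimodal search request. The structure, field
# names and build_request() helper are invented for illustration -- Google
# has not published an interface for MUM.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MultimodalQuery:
    text: str                          # the user's question
    image_path: Optional[str] = None   # optional photo, e.g. of hiking boots
    language: str = "en"

def build_request(query: MultimodalQuery) -> dict:
    """Bundle text and image into a single request, so a multimodal model
    could interpret the image content and the query intent together."""
    request = {"question": query.text, "lang": query.language}
    if query.image_path:
        with open(query.image_path, "rb") as f:
            request["image_bytes"] = f.read()
    return request

# Raghavan's Google I/O example expressed in this form (placeholder file name):
query = MultimodalQuery(text="Can I use these to hike Mt. Fuji?",
                        image_path="hiking_boots.jpg")
```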

Although Google’s experiments with MUM have inspired confidence, Nayak was keen to emphasize that the exact implementation of these “medium-term” objectives, along with any specific timelines, is uncertain.

Connecting the dots for users over the long term. “In the longer term, we think that the promise of MUM really stems from its ability to understand language at a much deeper level,” Nayak said, adding, “I think it’ll support much deeper information understanding and we hope to be able to convert that deeper information understanding into more robust experiences for our users.”

In their current state, search engines struggle to surface relevant results for specific, complex queries such as “I’ve hiked Mount Adams and I want to hike Mount Fuji next fall. What should I do differently to prepare?” “Today, if [a user] just went and typed that query into Google, there’s a very good chance it would not give any useful results . . . so what you would have to do is to break it up into individual queries that you can then sort of probe around and get the results and piece it together for yourself — we think MUM can help here,” Nayak said.

Continuing with the hiking example above, “We think MUM can take a piece of text [the search query] like that, which is this complex information need and break it up into these sort of individual information needs,” he said, suggesting that MUM’s language understanding capabilities could help Google provide results related to fitness training, Mt. Fuji’s terrain, climate and so on.

“Remember, we don’t have this working because this is long-term, but this is exactly the kind of thing that you’re doing in your head when you come up with individual queries and we think MUM can help us generate queries like this,” he said. “You can imagine we could issue multiple queries like this, get you results for them, maybe put in some text that connects all of this to the original, more complex question that you had — essentially organize this information . . . that shows what the connection is, so that you can now go in and read the article on the best gear for Mt. Fuji or the tips for altitude hiking or something like that in this richer way.”
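To make that idea concrete, here is a toy sketch of the decomposition Nayak describes. The sub-queries are hard-coded for the hiking example because MUM itself is not available; in Nayak’s description, the model would generate them from the original question.

```python
# Toy sketch of breaking one complex query into simpler sub-queries and
# organizing the results. The decomposition is hard-coded purely for
# illustration -- in Nayak's description, MUM itself would generate it.

COMPLEX_QUERY = ("I've hiked Mount Adams and I want to hike Mount Fuji "
                 "next fall. What should I do differently to prepare?")

# The individual information needs a searcher would otherwise probe manually.
SUB_QUERIES = [
    "Mount Fuji vs. Mount Adams elevation and difficulty",
    "Mount Fuji weather in fall",
    "fitness training for high-altitude hiking",
    "best gear for hiking Mount Fuji",
]

def search(query: str) -> list[str]:
    """Placeholder for issuing a single query to a search backend."""
    return [f"Result for: {query}"]

def answer_complex_need(complex_query: str) -> dict:
    """Issue each sub-query, then group the results under the original
    question so the searcher can explore them as one organized response."""
    return {
        "original_question": complex_query,
        "organized_results": {q: search(q) for q in SUB_QUERIES},
    }

if __name__ == "__main__":
    organized = answer_complex_need(COMPLEX_QUERY)["organized_results"]
    for sub_query, results in organized.items():
        print(sub_query, "->", results)
```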

One reason this is a long-term objective, Nayak explained, is that it requires rethinking how people come to Google: with complex needs rather than individual queries. Google would also have to convert the complex need, as expressed by a user’s search term, into a set of individual queries, and the results for those queries would have to be organized appropriately.

Who is driving development? When asked who would be directing MUM’s development and implementation, Nayak explained that Google is aiming to develop novel search experiences while also allowing individual teams to use MUM for their own projects.

“We fully expect many teams within search to use MUM in ways that we had not even envisaged,” he said. “But we also have efforts to have novel, new search experiences and we have people investigating the possibilities for building these new experiences.”

“What is abundantly clear to everyone, both existing teams and these teams looking at novel experiences, is that the base system seems extremely powerful and demonstrates a lot of promise. Now, it is up to us to convert that promise into great search experiences for our users — that’s where the challenge lies now,” he added.

MUM won’t be just a “question-answering system.” “This idea that maybe MUM is going to become a question-answering system — that is, you come to Google with a question and we just give you the answer — I’m here to tell you that is absolutely not the vision for MUM,” Nayak said. “And the reason is very simple: such a question-answering system for these complex needs that people have is just not useful.”

Nayak contrasted the complex intent queries that MUM may eventually help users navigate with the simpler, more objective searches that are often resolved right on the search results page: “I totally get it that if you ask a simple question, [for example,] ‘What is the speed of light?’ that it deserves a simple, straightforward answer, but most needs that people have — this hiking example or you want to find a school for your child or you’re figuring out what neighborhood you want to live in — any sort of even moderately complex intent is just not well satisfied by a short, crisp answer,” he said.

“You’ve probably heard the statistic that every year since the beginning of Google, we have sent more traffic to the open web than in the previous year — we fully expect MUM to continue this trend,” he reiterated, adding, “There is no expectation that it will become this question-answering system.”

Mitigating the costs and risks of developing MUM

Developing models for search can have an ecological impact and requires large datasets. Google says it is aware of these considerations and is taking precautions to apply MUM responsibly.

Limiting potential bias in the training data. “These models can learn and perpetuate biases in the training data in ways that are not great if there are undesirable biases of any sort,” Nayak said, adding that Google is addressing this issue by monitoring the data that MUM is trained on.

“We don’t train MUM on the whole web corpus, we train it on a high-quality subset of the web corpus so that all the undesirable biases in low-quality content, in adult and explicit content, it doesn’t even have a chance to learn those because we’re not even presenting that content to MUM,” he said, acknowledging that even high-quality content can contain biases, which the company’s evaluation process attempts to filter out.
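A simplified, hypothetical sketch of that kind of pre-training filter is below. The quality and explicit-content signals are stand-ins for upstream classifiers; Google has not detailed how it selects the high-quality subset it trains MUM on.

```python
# Hypothetical sketch of filtering a web corpus before training. The quality
# and explicit-content signals are assumed stand-ins -- Google has not
# published how it selects the "high-quality subset" MUM is trained on.
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Document:
    url: str
    text: str
    quality_score: float   # assumed 0.0-1.0 signal from an upstream classifier
    is_explicit: bool      # assumed flag from an adult-content classifier

def training_corpus(docs: Iterable[Document],
                    min_quality: float = 0.8) -> Iterator[Document]:
    """Yield only the documents the model is allowed to learn from."""
    for doc in docs:
        if doc.is_explicit:
            continue                  # never presented to the model
        if doc.quality_score < min_quality:
            continue                  # low-quality content is excluded
        yield doc
```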

Internal evaluations. “When we launched BERT a year and a half ago, we did an unprecedented amount of evaluation in the many months leading up to the launch just to make sure that there were no concerning patterns,” Nayak said. “And any concerning patterns we detected there, we took steps to mitigate — I fully expect that, before we have a significant launch of MUM in search, we’ll do a significant amount of evaluation in the same way to avoid any sort of concerning patterns.”

Addressing the ecological costs. Large models can be both expensive and energy-intensive to build, which may result in a detrimental impact on the environment.

“Our research team recently put out quite a comprehensive and interesting paper about the climate impact of various large models built by our research team, as well as some models built outside it, such as GPT-3, and the article . . . points out that, based on the particular choice of model, the processors and data centers used, the carbon impact can be reduced as much as a thousandfold,” Nayak said, adding that Google has been carbon-neutral since 2007: “So, whatever energy is being used, the carbon impact has been mitigated just by Google.”

MUM has potential; now we wait and see how Google uses it

Nayak’s comments on MUM’s future, and his assertion that he doesn’t foresee it becoming a “question-answering system,” are significant because Google is acknowledging a concern that many search marketers have. It’s also a concern for regulators that seek to ensure that Google doesn’t unfairly prioritize its own products over those of competitors.

It’s possible that other search engines are also developing similar technologies, as we saw with Bing and its implementation of BERT nearly six months before Google. Right now, Google seems to be the first out of the gate and, with the efficiency displayed in MUM’s first outing, that could be an advantage that helps to preserve the company’s market share.  

Google’s roadmap for MUM provides marketers with context and a lot of possibilities to consider, but at this point, nothing is certain enough to begin preparing for. What we can expect, however, is that if the technology gets implemented and resembles the examples Google has shown us, the way users search may adapt to take advantage of those features. A shift in search behavior is also likely to mean that marketers will have to identify new opportunities in search and adapt their strategies, which is par for the course in this industry.


About The Author

George Nguyen is an editor for Search Engine Land, covering organic search, podcasting and e-commerce. His background is in journalism and content marketing. Prior to entering the industry, he worked as a radio personality, writer, podcast host and public school teacher.
