Google parts with top AI researcher after blocking paper, faces blowback

Former Google AI Research Scientist Timnit Gebru speaks onstage during Day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018 in San Francisco, California.
Kimberly White | Getty Images

Google struggled on Thursday to limit the fallout from the departure of a top artificial intelligence researcher after the Internet group blocked the publication of a paper on an important AI ethics issue.

Timnit Gebru, who had been co-head of AI ethics at Google, said on Twitter that she had been fired after the paper was rejected.

Jeff Dean, Google’s head of AI, defended the decision in an internal email to staff on Thursday, saying the paper “didn’t meet our bar for publication.” He also described Dr. Gebru’s departure as a resignation in response to Google’s refusal to concede to unspecified conditions she had set to stay at the company.

The dispute has threatened to shine a harsh light on Google’s handling of internal AI research whose findings could hurt its business, as well as on the company’s long-running difficulties in diversifying its workforce.

Before she left, Gebru complained in an email to fellow workers that there was “zero accountability” inside Google around the company’s claims it wants to increase the proportion of women in its ranks. The email, first published on Platformer, also described the decision to block her paper as part of a process of “silencing marginalised voices.”

One person who worked closely with Gebru said that there had been tensions with Google management in the past over her activism in pushing for greater diversity. But the immediate cause of her departure was the company’s decision not to allow the publication of a research paper she had coauthored, this person added.

The paper examined potential bias in large-scale language models, one of the hottest new areas of natural language research. Systems such as OpenAI’s GPT-3 and Google’s own BERT are trained to predict missing or upcoming words in a phrase or sentence, a technique that has produced surprisingly fluent automated writing and that Google uses to better understand complex search queries.
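To make that prediction step concrete, here is a minimal sketch of next-word generation. GPT-3 itself is not openly available, so the example uses the smaller, publicly released GPT-2 checkpoint through the open-source Hugging Face transformers library; the prompt, model choice, and settings are illustrative assumptions, not Google’s or OpenAI’s production systems.

```python
# Minimal sketch of next-word prediction (illustrative only; the model,
# prompt, and library are assumptions, not the systems named in the article).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")  # public GPT-2 checkpoint
set_seed(42)  # make the sampled continuation reproducible

# The model repeatedly predicts a likely next word, extending the prompt.
result = generator(
    "The researcher submitted the paper to the conference because",
    max_new_tokens=20,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```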

The language models are trained on vast amounts of text, usually drawn from the Internet, which has raised warnings that they could regurgitate racial and other biases that are contained in the underlying training material.
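One common way to probe for such regurgitated bias is to ask a trained model to fill in a blank and inspect which words it ranks highest. The sketch below is a hedged illustration rather than anything from the blocked paper: it assumes the open-source Hugging Face transformers library and the public bert-base-uncased checkpoint, and the probe sentence is chosen purely for illustration.

```python
# Illustrative bias probe (assumptions: Hugging Face transformers and the
# public bert-base-uncased checkpoint, not the models discussed in the paper).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Ask the model to fill the masked slot; skew in the training text can show
# up as skewed rankings (e.g., which pronouns it favors for an occupation).
for prediction in fill_mask("The doctor said that [MASK] would be late."):
    print(f"{prediction['token_str']:>8}  score={prediction['score']:.3f}")
```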

“From the outside, it looks like someone at Google decided this was harmful to their interests,” said Emily Bender, a professor of computational linguistics at the University of Washington, who co-authored the paper.

“Academic freedom is very important—there are risks when [research] is taking place in places that [don’t] have that academic freedom,” giving companies or governments the power to “shut down” research they don’t approve of, she added.

Bender said the authors hoped to update the paper with newer research in time for it to be accepted at the conference to which it had already been submitted. But she added that it was common for such work to be superseded by newer studies, given how quickly research in the field is moving. “In the research literature, no paper is perfect.”

Julien Cornebise, a former AI researcher at DeepMind, the London-based AI group owned by Google’s parent, Alphabet, said that the dispute “shows the risks of having AI and machine learning research concentrated in the few hands of powerful industry actors, since it allows censorship of the field by deciding what gets published or not.”

He added that Gebru was “extremely talented—we need researchers of her calibre, no filters, on these issues.” Gebru did not immediately respond to requests for comment.

Dean said that the paper, written with three other Google researchers, as well as external collaborators, “didn’t take into account recent research to mitigate” the risk of bias. He added that the paper “talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies.”

© 2020 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
