Google ethics researcher’s departure renews worries the company is silencing whistleblowers
A prominent A.I. researcher has left Google, saying she was fired for criticizing the company’s lack of commitment to diversity, renewing concerns about the company’s attempts to silence criticism and debate.
Timnit Gebru, who was technical co-lead of a Google team that focused on A.I. ethics and algorithmic bias, wrote on Twitter that she had been pushed out of the company for writing an email to “women and allies” at Google Brain, the company’s division devoted to fundamental A.I. research, that drew the ire of senior managers.
Gebru is well-known among A.I. researchers for helping to promote diversity and inclusion within the field. She cofounded the Fairness, Accountability, and Transparency (FAccT) conference, which is dedicated to issues around A.I. bias, safety, and ethics. She also cofounded the group Black in AI, which highlights the work of Black machine learning experts and offers mentorship. The group has sought to raise awareness of bias and discrimination against Black computer scientists and engineers.
The researcher told Bloomberg News on Thursday that her firing came after a week in which she had wrangled with her managers over a research paper she had co-written with six other authors, including four from Google, that was about to be submitted for consideration at an academic conference next year. Google had asked her to either retract the paper or at least remove her name and those of the other Google employees, she told Bloomberg.
She also posted an email to the internal employee group complaining about her treatment and accusing Google of being disingenuous in its commitment to racial and gender diversity, equity and inclusivity.
Gebru told the news service that she had told Megan Kacholia, Google Research’s vice president and one of her supervisors, that without more discussion about the way the review process for the paper had been handled, and clear guarantees that the same thing wouldn’t happen again, she could not continue working at the company.
“We are a team called Ethical AI, of course we are going to be writing about problems in AI,” Gebru said.
She told Bloomberg that she had told Kacholia that if the company was unwilling to address her concerns, she would resign and leave following a transition period. The company then told her it would not agree to her conditions and that it was accepting her resignation effective immediately. It said Gebru’s decision to email the internal listserv reflected “behavior that is inconsistent with the expectations of a Google manager,” according to a tweet Gebru posted.
Fellow A.I. researchers took to Twitter to express support for Gebru and outrage at her apparent firing. “Google’s retaliation against Timnit—one of the brightest and most principled AI justice researchers in the field—is *alarming*,” Meredith Whittaker, faculty director at the AI Now Institute at New York University, wrote on Twitter.
“Speaking out against censorship is now ‘inconsistent with the expectations of a Google manager.’ She did that because she cares more and will risk everything to protect those she has hired to work under her—a team that happens to be more diverse than any other at Google,” Deb Raji, another researcher who specializes in A.I. fairness, ethics, and accountability and who works at Mozilla, wrote in a Twitter post.
Many noted that Gebru’s departure came on the same day the National Labor Relations Board accused Google of illegally dismissing workers who helped organize two companywide protests: one in 2019 against the company’s work with the U.S. Customs and Border Protection agency, and a 2018 walkout to demonstrate against the company’s handling of sexual harassment cases.
The NLRB accused Google of using “terminations and intimidation in order to quell workplace activism.”
Google has not issued a public comment on the NLRB case. In reference to questions about Gebru’s case, it referred Fortune to an email from Jeff Dean, Google’s senior vice president for research, that was obtained by the technology news site Platformer.
In that email, Dean said Gebru and her co-authors had not submitted the paper for internal review within the two-week period the company requires, and that an internal “cross-functional team” of reviewers had found “it didn’t meet our bar for publication” and “ignored too much relevant research” showing that some of the ethical issues she and her co-authors were raising had been at least partially mitigated.
Dean said Gebru’s departure was “a difficult moment, especially given the important research topics she was involved in, and how deeply we care about responsible AI research as an org and as a company.”
The incident is likely to renew concerns both inside and outside the company about the ethics of its technology and how it deals with employee dissent. Once known for its freewheeling and liberal corporate culture, Google has increasingly sought to limit employee speech, particularly when it touches on issues likely to embarrass the company or potentially impact its ability to secure lucrative work for various government agencies.
Platformer also obtained and published what it said was the email Gebru had sent to colleagues. In it, she criticizes the company’s commitment to diversity, saying that “this org seems to have hired only 14% or so women this year.” (She does not make it clear if that figure is for all of Google Research or some other entity.) She also accuses the company of paying lip service to diversity and inclusion efforts and advises those who want the company to change to seek ways to bring external pressure to bear on Google.
Gebru says in the email that she had informed Google’s public relations and policy team of her intent to write the paper at least two months before the submission deadline and that she had already circulated it to more than 30 other researchers for feedback.
Gebru implied in several tweets that she had raised ethical concerns about some of the company’s A.I. software, including its large language models. This kind of A.I. software is responsible for many breakthroughs in natural language processing, including Google’s improved translation and search results, but has been shown to absorb gender and racial bias from the vast collections of web pages and books used to train it.
In tweets yesterday, she singled out Dean, a storied figure among computer engineers and researchers known as one of the original coders of Google’s search engine, and implied that she had been planning to look at bias in Google’s large language models. “@JeffDean I realize how much large language models are worth to you now. I wouldn’t want to see what happens to the next person who attempts this,” she wrote.
Earlier in the week, Gebru had also implied Google managers were attempting to censor her work or bury her concerns about ethical issues in the company’s A.I. systems. “Is there anyone working on regulation protecting Ethical AI researchers, similar to whistleblower protection? Because with the amount of censorship & intimidation that goes on towards people in specific groups, how does anyone trust any real research in this area can take place?” she wrote in a Twitter post on Dec. 1.
This story has been updated to incorporate comments Gebru made to Bloomberg News and the publication of Jeff Dean’s email to Google staff.