Backlash against Google grows over firing of prominent Black A.I. researcher
Google faced mounting criticism Friday over its treatment of a prominent Black A.I. ethics researcher who says she was fired after questioning the company’s commitment to diversity in an email to colleagues.
By midmorning Pacific Time on Friday, more than 1,400 people, including nearly 600 who work at Google, had signed an open letter expressing solidarity with the researcher, Timnit Gebru. The letter also called on Google to strengthen its commitment to research integrity and support ethics research in line with the company’s own A.I. principles.
Gebru, who was co-lead of Google’s A.I. ethics team, is well known among machine learning researchers for her exploration of the racial and gender bias inherent in many A.I. systems. She was also one of Google AI Research’s most prominent Black employees and is a cofounder of Black in AI, an organization that has sought to highlight the work of Black machine learning researchers and provide mentorship and support to Black scholars in the field.
She says she was fired after writing a message to members of a “women and allies” email list within Google AI Research in which she accused the company of paying lip service to diversity and inclusion.
She said her managers informed her that the message was “inconsistent with the duties of a manager” at the company and immediately terminated her employment.
Jeff Dean, the senior vice president in charge of Google’s A.I. research, said in an email to staff that Gebru had offered to resign and that the company had accepted her resignation.
Google declined to comment on the open letter.
Both Gebru and the company say that her departure followed an incident in which the company asked Gebru to withdraw a research paper she and several other Google researchers had coauthored and submitted to an A.I. conference. In it, Gebru and her fellow researchers highlighted the racial bias inherent in a cutting-edge kind of natural language processing A.I. known as the large language model, which Google and several other companies have developed. Google now uses one of these large language models to power parts of its search results.
In his email to staff, Dean says that the paper “didn’t meet our bar for publication” because “it ignored too much relevant research,” including information on how some of the issues Gebru and her coauthors were raising could be mitigated.
Gebru has said that the process used to review the paper was opaque and irregular. The open letter calls on Dean and others involved in the decision to reject the paper to meet with the Google A.I. ethics team to explain the process they used. It also calls on the company to explain to the broader public what took place. And it demands that Google Research “make an unequivocal commitment to research integrity and academic freedom.”
Google has said that it is “committed to making diversity, equity, and inclusion part of everything we do,” but the company has for years struggled to make good on that stance. According to Google’s 2020 diversity report, Black employees make up just 3.4% of its U.S. staff, up from 2.4% in 2014. The company also noted that its attrition rate among Black women, who make up 1.6% of its U.S. workforce, had increased in the past year.
Gebru’s departure is almost certain to be a significant blow to the company’s effort to reverse this trend. Many, including Gebru, pointed out on Twitter that Google had in the past often held her and her work up as examples of the strides the company was making in addressing diversity and inclusion issues. Gebru said in her email to colleagues that her treatment had convinced her that “there is zero accountability” at Google for actually improving its culture.
Many machine learning researchers posted messages on Twitter saying that Gebru’s exit dealt a serious blow to Google’s reputation on A.I. ethics and diversity issues.
Joy Buolamwini, a computer scientist who founded the Algorithmic Justice League, a group dedicated to fighting algorithmic bias, and who had coauthored an influential paper with Gebru on racial bias in facial recognition systems, said Gebru’s firing “severely undermines Google’s credibility for supporting rigorous research on AI ethics and algorithmic auditing.”