Ousted Google ethics executive sees A.I. as a ‘gold rush’ where ‘the people making money are not the ones in the midst of it’

By Eleanor Pringle, Reporter

Eleanor Pringle is an award-winning reporter at Fortune covering news, the economy, and personal finance. Eleanor previously worked as a business correspondent and news editor in regional news in the U.K. She completed her journalism training with the Press Association after earning a degree from the University of East Anglia.

Timnit Gebru, formerly a Google staffer and now an A.I. research lab founder, has said the technology is like a modern-day “gold rush.”
Kimberly White/Getty Images for TechCrunch

Among the many voices clamoring for urgent regulation of artificial intelligence is Timnit Gebru.

Gebru has all the hallmarks of a Big Tech star: a master’s and a Ph.D. from Stanford, plus engineering and research roles at Apple and Microsoft before joining Google as an A.I. expert.

But in 2020 her time co-leading the ethical A.I. team at the Alphabet-owned company came to an end, a decision triggered by a paper she wrote warning of the bias being embedded in artificial intelligence.

Bias is a topic that experts in the field have raised for many years.

In 2015, Google apologized and said it was “appalled” after its A.I.-powered Photos app labeled a photograph of a Black couple as “gorillas.”

Warnings about A.I. bias are now becoming higher profile. Earlier this year the World Health Organization said that although it welcomed improved access to health information, the datasets used to train such models may have biases already built in.

Such cautions are the reason the public needs to remember it has “agency” over what happens with artificial intelligence, argued Gebru.

In an interview with the Guardian, the 40-year-old said, “It feels like a gold rush. In fact, it is a gold rush.

“And a lot of the people who are making money are not the people actually in the midst of it. But it’s humans who decide whether all this should be done or not. We should remember that we have the agency to do that.”

Gebru also pushed for clarification on what regulation would entail, after thousands of tech bosses—including Tesla’s Elon Musk, Apple cofounder Steve Wozniak, and OpenAI’s Sam Altman—said the industry needs guardrails.

But leaving it to tech bosses to regulate themselves wouldn’t work, Gebru continued: “Unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and we need something better than just a profit motive.”

It’s humans—not robots

The founder and director of the Distributed AI Research Institute (DAIR)—an independent A.I. research unit—also had a pointed reminder about the hypothetical threat the technology poses to humanity.

Fears range from a Terminator-like apocalypse—if you ask Musk—to the technology being used as a weapon of war, with others suggesting it already thinks of mankind as “scum.”

Gebru isn’t sold.

“A.I. is not magic,” she said. “There are a lot of people involved—humans.”

She said the theory that services like large language models could one day think for themselves “ascribes agency to a tool rather than the humans building the tool.”

“That means you can abdicate responsibility: ‘It’s not me that’s the problem. It’s the tool. It’s super-powerful. We don’t know what it’s going to do.’ Well, no—it’s you that’s the problem,” Gebru continued.

“You’re building something with certain characteristics for your profit. That’s extremely distracting and takes the attention away from real harms and things we need to do. Right now.”

Gebru remained optimistic, however: “Maybe, if enough people do small things and get organized, things will change. That’s my hope.”
