
OpenAI’s former top safety researcher says there’s a ‘10 to 20% chance’ that the tech will take over with many or most ‘humans dead’

By Tristan Bove, Contributing Reporter
May 3, 2023, 2:16 PM ET
Could ChatGPT and other A.I. models threaten humanity? CFOTO/Future Publishing via Getty Images

The rapid rise of new A.I. models in recent months, like OpenAI’s ChatGPT, has led some technologists and researchers to ponder whether artificial intelligence could soon surpass human capabilities. One key former researcher at OpenAI says that such a future is a distinct possibility, but also warns there is a non-zero chance that human- or superhuman-level A.I. could take control of humanity and even annihilate it.

A “full-blown A.I. takeover scenario” is top of mind for Paul Christiano, former head of language model alignment on OpenAI’s safety team, who warned in an interview last week on the tech-focused Bankless podcast that there is a very real chance advanced A.I. could bring about a world-ending calamity in the near future.

“Overall, maybe you’re getting more up to a 50-50 chance of doom shortly after you have A.I. systems that are human-level,” Christiano said. “I think maybe there’s a 10 to 20% chance of A.I. takeover [with] many, most humans dead.”

Christiano left OpenAI in 2021, explaining his departure during an Ask Me Anything session on LessWrong, a community blog site created by Eliezer Yudkowsky, a fellow A.I. researcher who has warned for years that superhuman A.I. could destroy humanity. Christiano wrote at the time that he wanted to “work on more conceptual/theoretical issues in alignment,” a subfield of A.I. safety research that focuses on ensuring A.I. systems are aligned with human interests and ethical principles, adding that OpenAI “isn’t the best” for that type of research.

Christiano now runs the Alignment Research Center, a nonprofit group working on theoretical A.I. alignment strategies, a field that has gained considerable interest in recent months as companies race to roll out increasingly sophisticated A.I. models. In March, OpenAI released GPT-4, an update to the A.I. model that powers ChatGPT, which only launched to the public in November. Meanwhile, tech behemoths including Google and Microsoft have kicked off an A.I. arms race to stake a claim in the burgeoning market, launching their own versions of A.I. models with commercial applications.

But with publicly available A.I. systems still riddled with errors and misinformation, Christiano and a host of other experts have cautioned against moving too fast. Elon Musk, an OpenAI cofounder who cut ties with the company in 2018, was one of 1,100 technologists who signed an open letter in March calling for a six-month pause on development of advanced A.I. models more powerful than GPT-4, and a refocus of research on improving existing systems’ reliability. (Musk has since announced that he is starting a ChatGPT competitor called TruthGPT, which he says will focus on “truth-seeking” instead of profit.)

One of the letter’s concerns was that existing A.I. models could be paving the way for superintelligent models that pose a threat to civilization. While current generative A.I. systems like ChatGPT can capably handle specific tasks, they are still far from human-level intelligence, the benchmark for a hypothetical future form of A.I. known as artificial general intelligence (AGI).

Experts have been divided on the timeline of AGI’s development, with some arguing it could take decades and others saying it may never be possible, but the rapid pace of A.I. advancement is starting to turn heads. In a Stanford University survey published in April, around 57% of A.I. and computer science researchers said A.I. research is quickly moving toward AGI, while 36% said entrusting advanced versions of A.I. with important decisions could lead to “nuclear-level catastrophe” for humanity.

Other experts have warned that even if more powerful versions of A.I. are developed to be neutral, they could quickly become dangerous if employed by ill-intentioned humans. “It is hard to see how you can prevent the bad actors from using it for bad things,” Geoffrey Hinton, a former Google researcher who is often called the godfather of A.I., told the New York Times this week. “I don’t think they should scale this up more until they have understood whether they can control it.” 

Christiano said in his interview that civilization could be at risk if A.I. develops to the point that society can no longer function without it, leaving humanity vulnerable if a powerful A.I. decides it no longer needs to act in its creators’ interest. 

“The most likely way we die involves—like, not A.I. comes out of the blue and kills everyone—but involves we have deployed a lot of A.I. everywhere,” he said. “If for some reason, God forbid, all these A.I. systems were trying to kill us, they would definitely kill us.”

Other voices have pushed back against these interpretations of A.I., however. Some experts have argued that while A.I. designed to accomplish specific tasks is inevitable, developing AGI that can match human intelligence might never become technically feasible due to computers’ limitations when it comes to interpreting life experiences. 

Responding to recent dire warnings over A.I., entrepreneur and computer scientist Perry Metzger argued in a tweet last month that while “deeply superhuman” A.I. is a likelihood, it will likely be years or decades before AGI evolves to the point of being capable of revolting against its creators, who will likely have time to steer A.I. in the right direction. Responding to Metzger’s tweet, Yann LeCun, an NYU computer scientist who has directed A.I. research at Meta since 2013, wrote that the fatalistic scenario of AGI developing dangerous and uncontrollable abilities overnight is “utterly impossible.”

© 2026 Fortune Media IP Limited. All Rights Reserved.