OpenAI’s former top safety researcher says there’s a ‘10 to 20% chance’ that the tech will take over with many or most ‘humans dead’

By Tristan Bove, Contributing Reporter
May 3, 2023, 2:16 PM ET
Could ChatGPT and other A.I. models threaten humanity? CFOTO/Future Publishing via Getty Images

The rapid rise of new A.I. models in recent months, like OpenAI’s ChatGPT, has led some technologists and researchers to ponder whether artificial intelligence could soon surpass human capabilities. One key former researcher at OpenAI says that such a future is a distinct possibility, but also warns there is a non-zero chance that human- or superhuman-level A.I. could take control of humanity and even annihilate it.

A “full-blown A.I. takeover scenario” is top of mind for Paul Christiano, former head of language model alignment on OpenAI’s safety team, who warned in an interview last week with the tech-focused Bankless podcast that there is a decent chance advanced A.I. could spell world-ending calamity in the near future.

“Overall, maybe you’re getting more up to a 50-50 chance of doom shortly after you have A.I. systems that are human-level,” Christiano said. “I think maybe there’s a 10 to 20% chance of A.I. takeover [with] many, most humans dead.”

Christiano left OpenAI in 2021, explaining his departure during an Ask Me Anything session on LessWrong, a community blog site created by Eliezer Yudkowsky, a fellow A.I. researcher who has warned for years that superhuman A.I. could destroy humanity. Christiano wrote at the time that he wanted to “work on more conceptual/theoretical issues in alignment,” a subfield of A.I. safety research that focuses on ensuring A.I. systems are aligned with human interests and ethical principles, adding that OpenAI “isn’t the best” for that type of research.

Christiano now runs the Alignment Research Center, a nonprofit group working on theoretical A.I. alignment strategies, a field that has gained considerable interest in recent months as companies race to roll out increasingly sophisticated A.I. models. In March, OpenAI released GPT-4, an update to the A.I. model that powers ChatGPT, which only launched to the public in November. Meanwhile, tech behemoths including Google and Microsoft have kicked off an A.I. arms race to stake a claim in the burgeoning market, launching their own versions of A.I. models with commercial applications.

But with publicly available A.I. systems still riddled with errors and misinformation, Christiano and a host of other experts have cautioned against moving too fast. Elon Musk, an OpenAI cofounder who cut ties with the company in 2018, was one of 1,100 technologists who signed an open letter in March calling for a six-month pause on development of A.I. models more powerful than GPT-4, and a refocus on improving existing systems’ reliability. (Musk has since announced that he is starting a competitor to ChatGPT called TruthGPT, which he says will focus on “truth-seeking” instead of profit.)

One of the letter’s concerns was that existing A.I. models could be paving the way for superintelligent models that pose a threat to civilization. While current generative A.I. systems like ChatGPT can capably handle specific tasks, they are still far from matching human-level intelligence across the board, the threshold at which a hypothetical future iteration of A.I. would qualify as artificial general intelligence (AGI).

Experts have been divided on the timeline of AGI’s development, with some arguing it could take decades and others saying it may never be possible, but the rapid pace of A.I. advancement is starting to turn heads. Around 57% of A.I. and computer science researchers said A.I. research is quickly moving towards AGI in a Stanford University survey published in April, while 36% said entrusting advanced versions of A.I. with important decisions could lead to “nuclear-level catastrophe” for humanity.

Other experts have warned that even if more powerful versions of A.I. are developed to be neutral, they could quickly become dangerous if employed by ill-intentioned humans. “It is hard to see how you can prevent the bad actors from using it for bad things,” Geoffrey Hinton, a former Google researcher who is often called the godfather of A.I., told the New York Times this week. “I don’t think they should scale this up more until they have understood whether they can control it.” 

Christiano said in his interview that civilization could be at risk if A.I. develops to the point that society can no longer function without it, leaving humanity vulnerable if a powerful A.I. decides it no longer needs to act in its creators’ interest. 

“The most likely way we die involves—like, not A.I. comes out of the blue and kills everyone—but involves we have deployed a lot of A.I. everywhere,” he said. “If for some reason, God forbid, all these A.I. systems were trying to kill us, they would definitely kill us.”

Other voices have pushed back against these interpretations of A.I., however. Some experts have argued that while A.I. designed to accomplish specific tasks is inevitable, developing AGI that can match human intelligence might never become technically feasible due to computers’ limitations when it comes to interpreting life experiences. 

Responding to recent dire warnings over A.I., entrepreneur and computer scientist Perry Metzger argued in a tweet last month that while “deeply superhuman” A.I. is likely, it will be years or decades before AGI evolves to the point of being capable of revolting against its creators, who will likely have time to steer A.I. in the right direction. Responding to Metzger’s tweet, Yann LeCun, an NYU computer scientist who has directed A.I. research at Meta since 2013, wrote that the fatalistic scenario of AGI developing dangerous and uncontrollable abilities overnight is “utterly impossible.”



