
A.I. could lead to a ‘nuclear-level catastrophe’ according to a third of researchers, a new Stanford report finds

By Tristan Bove
April 10, 2023, 1:18 PM ET
Sam Altman of OpenAI, the company that created ChatGPT. Jason Redmond—AFP/Getty Images

It was a blockbuster 2022 for artificial intelligence. The technology made waves, from Google’s DeepMind predicting the structure of almost every known protein in the human body to the successful launches of OpenAI’s generative A.I. tools DALL-E and ChatGPT. The sector now looks to be on a fast track toward revolutionizing our economy and everyday lives, but many experts remain concerned that changes are happening too fast, with potentially disastrous implications for the world.

Many experts in A.I. and computer science say the technology is likely a watershed moment for human society. But not all of them mean that as a positive: 36% of researchers surveyed warned that decisions made by A.I. could lead to a “nuclear-level catastrophe,” according to an annual report on the technology published earlier this month by Stanford University’s Institute for Human-Centered A.I.

Almost three quarters of researchers in natural language processing—the branch of A.I. concerned with how computers understand and generate human language—say the technology might soon spark “revolutionary societal change,” according to the report. And while an overwhelming majority of researchers say the future net impact of A.I. and natural language processing will be positive, concerns remain that the technology could soon develop potentially dangerous capabilities, and that A.I.’s traditional gatekeepers are no longer as powerful as they once were.

“As the technical barrier to entry for creating and deploying generative A.I. systems has lowered dramatically, the ethical issues around A.I. have become more apparent to the general public. Startups and large companies find themselves in a race to deploy and release generative models, and the technology is no longer controlled by a small group of actors,” the report said.

Fears about A.I. over the past few months have mostly centered on the technology’s disruptive implications for society. Companies including Google and Microsoft are locked in an arms race over generative A.I., systems trained on troves of data that can generate text and images based on simple prompts. But as OpenAI’s ChatGPT has already proven, these technologies can quickly wipe out livelihoods. If generative A.I. lives up to its potential, up to 300 million jobs could be at risk in the U.S. and Europe, according to a Goldman Sachs research note last month, with legal and administrative professions the most exposed.

Goldman researchers noted that A.I.’s labor market disruption could be offset in the long run by new job creation and improved productivity, but generative A.I. has also sparked fears over the technology’s tendency to be inaccurate. Both Microsoft’s and Google’s A.I. offerings have frequently made untrue or misleading statements, with one recent study finding that Google’s Bard chatbot produced false narratives on nearly eight out of 10 topics tested. A.I.’s imprecision, along with its tendency to veer into disturbing conversations during extended use, has pushed developers and experts to warn that the technology should not yet be used to make major decisions.

But the fast pace of A.I. development means that companies and individuals who don’t take risks with it could be left behind, and the technology could soon advance so much that we may not have a choice.

At its current pace of development, research is moving on from generative A.I. to creating artificial general intelligence, according to 57% of researchers surveyed by Stanford. Artificial general intelligence, or AGI, is an A.I. system that can accurately mimic or even outperform the capabilities of a human brain. There is very little consensus over when AGI could arrive, with estimates ranging from 50 years to several hundred, and some researchers question whether true AGI is possible at all.

But if AGI does become reality, it would likely represent a seminal moment in human history and development, with some even fearing it could bring about a technological singularity, a hypothetical future point when humans lose control of technological growth and their creations gain above-human intelligence. Around 58% of the Stanford researchers surveyed called AGI an “important concern.”

The survey found that experts’ most pressing concern is that current A.I. research focuses too much on scaling and hitting benchmarks while failing to incorporate insights from other research fields. Other experts have raised similar concerns, calling for major developers to slow the pace of A.I. rollouts while ethics research catches up. Elon Musk and Steve Wozniak were among the 1,300 signatories of an open letter last month calling for a six-month pause on developing more powerful versions of A.I. while research continues into the technology’s larger implications.
