
OpenAI warns its future models will have a higher risk of aiding bioweapons development

By Beatrice Nolan, Tech Reporter
June 19, 2025, 7:35 AM ET
Johannes Heidecke (R), OpenAI's head of safety systems, talks with Reinhard Heckel (L), professor of machine learning at the Department of Computer Engineering at TUM, and OpenAI CEO Sam Altman in a panel discussion at the Technical University of Munich (TUM) in May 2023. Sven Hoppe—picture alliance via Getty Images
  • OpenAI says its next generation of AI models could significantly increase the risk of biological weapon development, even enabling individuals with no scientific background to create dangerous agents. The company is boosting its safety testing as it anticipates some models will reach its highest risk tier.

OpenAI is warning that its next generation of advanced AI models could pose a significantly higher risk of biological weapon development, especially when used by individuals with little to no scientific expertise.

OpenAI executives told Axios they anticipate upcoming models will soon trigger the high-risk classification under the company’s preparedness framework, a system designed to evaluate and mitigate the risks posed by increasingly powerful AI models.  

OpenAI’s head of safety systems, Johannes Heidecke, told the outlet that the company is “expecting some of the successors of our o3 (reasoning model) to hit that level.”

In a blog post, the company said it was increasing its safety testing to mitigate the risk that models will help users in the creation of biological weapons. OpenAI is concerned that without these mitigations models will soon be capable of “novice uplift,” allowing those with limited scientific knowledge to create dangerous weapons.

“We’re not yet in the world where there’s like novel, completely unknown creation of bio threats that have not existed before,” Heidecke said. “We are more worried about replicating things that experts already are very familiar with.”

Part of what makes the problem difficult is that the same capabilities that could unlock life-saving medical breakthroughs could also be used by bad actors for dangerous ends. According to Heidecke, this is why leading AI labs need highly accurate testing systems in place.

“This is not something where like 99% or even one in 100,000 performance is … sufficient,” he said. “We basically need, like, near perfection.”

Representatives for OpenAI did not immediately respond to a request for comment from Fortune, made outside normal working hours.

Model misuse

OpenAI is not the only company concerned about the misuse of its models when it comes to weapon development. As models become more advanced, their potential for misuse and risk generally grows.

Anthropic recently launched its most advanced model, Claude Opus 4, with stricter safety protocols than any of its previous models, categorizing it as AI Safety Level 3 (ASL-3) under the company’s Responsible Scaling Policy. Previous Anthropic models have all been classified as AI Safety Level 2 (ASL-2) under the company’s framework, which is loosely modeled on the U.S. government’s biosafety level (BSL) system.

Models that are categorized in this third safety level meet more dangerous capability thresholds and are powerful enough to pose significant risks, such as aiding in the development of weapons or automating AI R&D. Anthropic’s most advanced model also made headlines after it opted to blackmail an engineer to avoid being shut down in a highly controlled test.

Early versions of Anthropic’s Claude 4 were found to comply with dangerous instructions, for example, helping to plan terrorist attacks, if prompted. However, the company said this issue was largely mitigated after a dataset that was accidentally omitted during training was restored.

About the Author

Beatrice Nolan is a tech reporter on Fortune’s AI team, covering artificial intelligence and emerging technologies and their impact on work, industry, and culture. She’s based in Fortune’s London office and holds a bachelor’s degree in English from the University of York. You can reach her securely via Signal at beatricenolan.08

© 2025 Fortune Media IP Limited. All Rights Reserved.