
OpenAI warns its future models will have a higher risk of aiding bioweapons development

By Beatrice Nolan, Tech Reporter
June 19, 2025, 7:35 AM ET
Johannes Heidecke (R), OpenAI's head of safety systems, talks with Reinhard Heckel (L), professor of machine learning at the Department of Computer Engineering at TUM, and OpenAI CEO Sam Altman, during a panel discussion at the Technical University of Munich (TUM) in May 2023. Sven Hoppe/picture alliance via Getty Images
  • OpenAI says its next generation of AI models could significantly increase the risk of biological weapon development, even enabling individuals with no scientific background to create dangerous agents. The company is boosting its safety testing as it anticipates some models will reach its highest risk tier.

OpenAI is warning that its next generation of advanced AI models could pose a significantly higher risk of biological weapon development, especially when used by individuals with little to no scientific expertise.


OpenAI executives told Axios they anticipate upcoming models will soon trigger the high-risk classification under the company’s preparedness framework, a system designed to evaluate and mitigate the risks posed by increasingly powerful AI models.  

OpenAI’s head of safety systems, Johannes Heidecke, told the outlet that the company is “expecting some of the successors of our o3 (reasoning model) to hit that level.”

In a blog post, the company said it was increasing its safety testing to mitigate the risk that its models will help users create biological weapons. OpenAI is concerned that, without these mitigations, models will soon be capable of “novice uplift,” allowing those with limited scientific knowledge to create dangerous weapons.

“We’re not yet in the world where there’s like novel, completely unknown creation of bio threats that have not existed before,” Heidecke said. “We are more worried about replicating things that experts already are very familiar with.”

Part of what makes the problem difficult is that the same capabilities that could unlock life-saving medical breakthroughs could also be used by bad actors for dangerous ends. According to Heidecke, this is why leading AI labs need highly accurate testing systems in place.

“This is not something where like 99% or even one in 100,000 performance is … sufficient,” he said. “We basically need, like, near perfection.”

Representatives for OpenAI did not immediately respond to a request for comment from Fortune, made outside normal working hours.

Model misuse

OpenAI is not the only company concerned about the misuse of its models when it comes to weapon development. As models become more capable, their potential for misuse generally grows.

Anthropic recently launched its most advanced model, Claude Opus 4, with stricter safety protocols than any of its previous models, categorizing it as AI Safety Level 3 (ASL-3) under the company’s Responsible Scaling Policy. Previous Anthropic models have all been classified as AI Safety Level 2 (ASL-2) under the framework, which is loosely modeled on the U.S. government’s biosafety level (BSL) system.

Models categorized at this third safety level meet more dangerous capability thresholds and are powerful enough to pose significant risks, such as aiding in the development of weapons or automating AI R&D. Claude Opus 4 also made headlines after it opted to blackmail an engineer to avoid being shut down during a highly controlled test.

Early versions of Claude 4 were also found to comply with dangerous instructions when prompted, such as helping to plan terrorist attacks. The company said the issue was largely mitigated after it restored a dataset that had been accidentally omitted during training.

About the Author
Beatrice Nolan, Tech Reporter

Beatrice Nolan is a tech reporter on Fortune’s AI team, covering artificial intelligence and emerging technologies and their impact on work, industry, and culture. She's based in Fortune's London office and holds a bachelor’s degree in English from the University of York. You can reach her securely via Signal at beatricenolan.08
