
OpenAI’s ex-policy lead accuses the company of ‘rewriting’ its AI safety history

By Beatrice Nolan, Tech Reporter
March 7, 2025, 2:28 PM ET
  • OpenAI’s former policy lead Miles Brundage has accused the company of “rewriting history” around how it approached the launch of its GPT-2 model in 2019. He argues that a new OpenAI blog post on AI safety subtly tilts the company toward releasing AI models unless there is incontrovertible evidence that they present an immediate danger.

A former policy lead at OpenAI is accusing the company of rewriting its history with a new post about the company’s approach to safety and alignment.


Miles Brundage, former head of policy research at OpenAI, criticized a recent post published by the company titled “How we think about safety and alignment.”

In it, the company described the road to artificial general intelligence (AGI)—an AI system that can perform cognitive tasks as well as or better than a person—as a continuous evolution, rather than a sudden leap. It also emphasized the value of “iterative deployment,” which involves releasing AI systems, learning from how users interact with them, and then refining safety measures based on this evidence.

While Brundage praised the “bulk” of the post, he criticized the company for rewriting the “history of GPT-2 in a concerning way.”

GPT-2, released in February 2019, was the second iteration of OpenAI’s flagship large language model. At the time, it was a much larger and more capable model than its predecessor, GPT-1, and was trained on a much broader dataset. But compared with subsequent GPT models, particularly GPT-3.5, the model that powered ChatGPT, GPT-2 was not especially capable. It could write poetry and a few coherent paragraphs of prose, but asked to generate much more than that, its outputs often descended into strange non sequiturs or gibberish. It was not particularly good at answering factual questions, summarizing, or coding, or most of the tasks that people now address using LLMs.

Nonetheless, OpenAI initially withheld GPT-2’s full release and source code, citing concerns about the potential for dangerous misuse of the model. Instead, it gave a select number of news outlets limited access to a demo version of the model.

At the time, critics, including many AI researchers in academia, argued OpenAI’s claims that the model presented a significantly increased risk of misuse were overblown or disingenuous. Some questioned whether the decision to withhold the model was a publicity stunt—an underhanded way of hyping the unreleased model’s capabilities and ensuring that OpenAI’s announcement would generate lots of headlines.

One AI-focused publication even penned an open letter urging OpenAI to release GPT-2, arguing its importance outweighed the risks. Eventually, OpenAI rolled out a partial version, followed by a full release months later.

In its recent safety post, OpenAI said it didn’t release GPT-2 due to “concerns about malicious applications.” But it then essentially argued that some of its former critics had been right and that the company’s concerns about misuse had proved overblown and unnecessary. It also suggested that some of that excess caution stemmed from the fact that many of the company’s AI safety researchers and policy staff assumed AGI would emerge suddenly, with one model abruptly leaping over the threshold to human-like intelligence, rather than arriving gradually.

“In a discontinuous world, practicing for the AGI moment is the only thing we can do, and safety lessons come from treating the systems of today with outsized caution relative to their apparent power. This is the approach we took for GPT‑2,” OpenAI wrote.

However, Brundage, who was at the company when the model was released and was intimately involved in discussions about how the company would handle its release, argued that GPT-2’s launch “was 100% consistent with and foreshadowed OpenAI’s current philosophy of iterative deployment.”

“The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution,” Brundage wrote on X.

He dismissed the idea that OpenAI’s caution with GPT-2 was unnecessary or based on outdated assumptions about AGI. “What part of that was motivated by or premised on thinking of AGI as discontinuous? None of it,” he wrote.

Brundage argued that the post’s revisionist history serves to subtly bias the company in the direction of dismissing the concerns of AI safety researchers and releasing AI models, unless there is incontrovertible evidence that they present an immediate danger.

“It feels as if there is a burden of proof being set up in this section where concerns are alarmist and you need overwhelming evidence of imminent dangers to act on them—otherwise, just keep shipping,” he said. “That is a very dangerous mentality for advanced AI systems.”

“If I were still working at OpenAI, I would be asking why this blog post was written the way it was, and what exactly OpenAI hopes to achieve by pooh-poohing caution in such a lopsided way,” he wrote.

OpenAI’s new approach to AI safety

OpenAI’s blog post introduced two new ideas: the importance of iterative deployment, and a slightly different approach to testing its AI models.

Robert Trager, the co-director of the Oxford Martin AI Governance Initiative, told Fortune that the company appeared to be distancing itself from relying heavily on theory when testing its models.

“It was like they were saying, we’re not going to rely on math proving that the system is safe. We’re going to rely on testing the system in a secure environment,” he said.

“It makes sense to rely on all the tools that we have,” he added. “So it’s strange to say we’re not going to rely so much on that tool.”

Trager also said that iterative deployment works best when models are being deployed very often with minor changes between each release. However, he noted that this kind of approach may not be practical for OpenAI as some systems could be significantly different from what was deployed in the past.

“Their argument that there really won’t be much of an impact, or a differential impact, from one system to the next; it doesn’t seem quite to be convincing,” he said.

Hamza Chaudhry, the AI and National Security lead at the Future of Life Institute, a nonprofit that has raised concerns about AI’s potential risk to humanity, said that “relying on gradual rollouts may mean that potentially harmful capabilities and behaviors are exposed to the real world before being fully mitigated.”

OpenAI also did not mention “staged deployment” in its blog post, which generally means releasing a model in stages and evaluating it along the way: for example, allowing a small group of internal testers to access an AI model and assessing the results before releasing it to a larger set of users.

“The impression it makes is that they’re offering potential future justifications for actions that aren’t necessarily consistent with what their safety standards have been in the past. And I would say that overall, they haven’t made the case that new standards are better than earlier standards,” Trager said.

Chaudhry said that OpenAI’s approach to safety amounted to “reckless experimenting on the public”—something that would not be allowed in any other industry. He also said this was “part and parcel of a broader push from OpenAI to minimize real government oversight over advanced high-stakes AI systems.”

The post has been criticized by other prominent figures in the industry. Gary Marcus, professor emeritus of psychology and neural science at New York University, told Fortune the blog felt like “marketing” rather than an attempt to explain any new safety approaches.

“It’s a way to hype AGI,” he said. “And it’s an excuse to dump stuff in the real world rather than properly sandboxing it before releasing and making sure it is actually okay. The blog is certainly not an actual solution to the many challenges of AI safety.”

OpenAI has been under pressure over AI safety concerns

Over the past year, OpenAI has faced criticism from some AI experts for prioritizing product development over safety.

Several former OpenAI employees have quit over internal AI safety disputes, including prominent AI researcher Jan Leike.

Leike left the company last year at the same time as OpenAI cofounder Ilya Sutskever. He openly blamed the lack of safety prioritization at the company for his departure, claiming that over the past few years, “safety culture and processes have taken a backseat to shiny products.” At the time, Leike and Sutskever were co-leading the company’s Superalignment team, which focused on the long-term risks of superpowerful artificial intelligence that would be more capable than all of humanity. After the pair parted ways with the company, the team was dissolved.

Internally, employees said that OpenAI had failed to give safety teams the compute it had promised. In May last year, a half-dozen sources familiar with the functioning of the Superalignment team told Fortune that OpenAI never fulfilled an earlier commitment to provide the safety team with 20% of its computing power.

The internal disagreements over AI safety have also resulted in an exodus of safety-focused employees. Daniel Kokotajlo, a former OpenAI governance researcher, told Fortune in August that nearly half of the company’s staff who once focused on the long-term risks of superpowerful AI had left the company.

Marcus said that OpenAI had failed to live up to its purported principles and “instead…repeatedly prioritized profit over safety (which is presumably part of why so many safety-conscious employees left).”

“For years, OpenAI has been pursuing a ‘black box’ technology that probably can’t ever be properly aligned, and done little to seriously consider alternative, more transparent technologies that might be less short-term profitable but safer for humanity in the long run,” he said.

Representatives for OpenAI did not respond to Fortune’s request for comment. Brundage declined to provide further comments.

About the Author
By Beatrice Nolan, Tech Reporter

Beatrice Nolan is a tech reporter on Fortune’s AI team, covering artificial intelligence and emerging technologies and their impact on work, industry, and culture. She’s based in Fortune’s London office and holds a bachelor’s degree in English from the University of York. You can reach her securely via Signal at beatricenolan.08
