Exclusive: OpenAI promised 20% of its computing power to combat the most dangerous kind of AI—but never delivered, sources say

By Jeremy Kahn, Editor, AI
May 21, 2024, 6:53 AM ET
OpenAI cofounder and CEO Sam Altman. The company publicly committed to spend 20% of its computing resources figuring out how to control the most dangerous form of AI. But the team dedicated to that task never got the computing resources it was promised, sources familiar with its work tell Fortune. Photo: Stefan Wermuth—Bloomberg/Getty Images

In July 2023, OpenAI unveiled a new team dedicated to ensuring that future AI systems that might be more intelligent than all humans combined could be safely controlled. To signal how serious the company was about this goal, it publicly promised to dedicate 20% of its then-available computing resources to the effort.

Now, less than a year later, that team, which was called Superalignment, has been disbanded amid staff resignations and accusations that OpenAI is prioritizing product launches over AI safety. According to a half-dozen sources familiar with the functioning of OpenAI’s Superalignment team, OpenAI never fulfilled its commitment to provide the team with 20% of its computing power.

Instead, according to the sources, the team repeatedly saw its requests for access to graphics processing units, the specialized computer chips needed to train and run AI applications, turned down by OpenAI’s leadership, even though the team’s total compute budget never came close to the promised 20% threshold.

The revelations call into question how serious OpenAI ever was about honoring its public pledge, and whether other public commitments the company makes should be trusted. OpenAI did not respond to requests to comment for this story.

The company is currently facing a backlash over its use of a voice for its AI speech generation features that is strikingly similar to actress Scarlett Johansson’s. In that case, questions have been raised about the credibility of OpenAI’s public statements that the similarity between the AI voice it calls “Sky” and Johansson’s voice is purely coincidental. Johansson says OpenAI cofounder and CEO Sam Altman approached her last September, when the Sky voice debuted, asking permission to use her voice. She declined. And she says Altman asked again for permission to use her voice last week, just before a closely watched demonstration of the company’s latest GPT-4o model, which used the Sky voice. OpenAI has denied using Johansson’s voice without her permission, saying it paid a professional actress, whose name it says it cannot legally disclose, to create Sky. But Johansson’s claims have now cast doubt on this account, with some speculating on social media that OpenAI in fact cloned Johansson’s voice or perhaps blended another actress’s voice with Johansson’s in some way to create Sky.

OpenAI’s Superalignment team had been set up under the leadership of Ilya Sutskever, the OpenAI cofounder and former chief scientist, whose departure from the company was announced last week. Jan Leike, a longtime OpenAI researcher, co-led the team. He announced his own resignation Friday, two days after Sutskever’s departure. The company then told the remaining employees on the team—which numbered about 25 people—that it was being disbanded and that they were being reassigned within the company.

It was a swift downfall for a team whose work OpenAI had positioned less than a year earlier as vital for the company and critical for the future of civilization. Superintelligence is the idea of a future, hypothetical AI system that would be smarter than all humans combined. It is a technology that would lie even beyond the company’s stated goal of creating artificial general intelligence, or AGI—a single AI system as smart as any person.

Superintelligence, the company said when announcing the team, could pose an existential risk to humanity by seeking to kill or enslave people. “We don’t have a solution for steering and controlling a potentially superintelligent AI, and preventing it from going rogue,” OpenAI said in its announcement. The Superalignment team was supposed to research those solutions.

It was a task so important that the company said in its announcement that it would commit “20% of the compute we’ve secured to date over the next four years” to the effort.

But a half-dozen sources familiar with the Superalignment team’s work said that the group was never allocated this compute. Instead, it received far less in the company’s regular compute allocation budget, which is reassessed quarterly.

One source familiar with the Superalignment team’s work said that there were never any clear metrics around exactly how the 20% amount was to be calculated, leaving it subject to wide interpretation. For instance, the source said the team was never told whether the promise meant “20% each year for four years” or “5% a year for four years” or some variable amount that could wind up being “1% or 2% for the first three years, and then the bulk of the commitment in the fourth year.” In any case, all the sources Fortune spoke to for this story confirmed that the Superalignment team was never given anything close to 20% of OpenAI’s secured compute as of July 2023.

OpenAI researchers can also make requests for what is known as “flex” compute—access to additional GPU capacity beyond what has been budgeted—to deal with new projects between the quarterly budgeting meetings. But flex requests from the Superalignment team were routinely rejected by higher-ups, these sources said.

Bob McGrew, OpenAI’s vice president of research, was the executive who informed the team that these requests were being declined, the sources said, but others at the company, including chief technology officer Mira Murati, were involved in making the decisions. Neither McGrew nor Murati responded to requests to comment for this story.

While the team did carry out some research—in December 2023 it released a paper detailing experiments in which a less powerful AI model was successfully used to control a more powerful one—the lack of compute stymied the team’s more ambitious ideas, the sources said.

After resigning, Leike on Friday published a series of posts on X (formerly Twitter) in which he criticized his former employer, saying “safety culture and processes have taken a backseat to shiny products.” He also said that “over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”

Five sources familiar with the Superalignment team’s work backed up Leike’s account, saying that the problems with accessing compute worsened in the wake of the pre-Thanksgiving showdown between Altman and the board of the OpenAI nonprofit foundation.

Sutskever, who was on the board, had voted to fire Altman and was the person the board chose to give Altman the news. When OpenAI’s staff rebelled in response to the decision, Sutskever subsequently posted on X that he “deeply regretted” his participation in Altman’s firing. Ultimately, Altman was rehired, and Sutskever and several other board members involved in his dismissal stepped down from the board. Sutskever never returned to work at OpenAI following Altman’s rehiring, but he did not formally leave the company until last week.

One source disputed the way the other sources Fortune spoke to characterized the compute problems the Superalignment team faced, saying the problems predated Sutskever’s participation in the failed coup and had plagued the group from the get-go.

While there have been some reports that Sutskever was continuing to co-lead the Superalignment team remotely, sources familiar with the team’s work said this was not the case and that Sutskever had no access to the team’s work and played no role in directing the team after Thanksgiving.

With Sutskever gone, the Superalignment team lost the only person on the team who had enough political capital within the organization to successfully argue for its compute allocation, the sources said. 

In addition to Leike and Sutskever, OpenAI has lost at least six other AI safety researchers from different teams in recent months. One researcher, Daniel Kokotajlo, told news site Vox that he “gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.” 

In response to Leike’s comments, Altman and cofounder Greg Brockman, who is OpenAI’s president, posted on X that they were “grateful to [Leike] for everything he’s done for OpenAI.” The two went on to write, “We need to keep elevating our safety work to match the stakes of each new model.”

They then laid out their view of the company’s approach to AI safety going forward, which would involve a much greater emphasis on testing models currently under development than on developing theoretical approaches to making future, more powerful models safe. “We need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities,” Brockman and Altman wrote, adding that “empirical understanding can help inform the way forward.”

The people who spoke to Fortune did so anonymously, either because they said they feared losing their jobs, or because they feared losing vested equity in the company, or both. Employees who have left OpenAI have been forced to sign separation agreements that include a strict non-disparagement clause that says the company can claw back their vested equity if they criticize the company publicly, or if they even acknowledge the clause’s existence. And employees have been told that anyone who refuses to sign the separation agreement will forfeit their equity as well.

After Vox reported on these separation terms, Altman posted on X that he had been unaware of that provision and was “genuinely embarrassed” by that fact. He said OpenAI had never attempted to enforce the clause and claw back anyone’s vested equity. He said the company was in the process of updating its exit paperwork to “fix” the issue and that any past employee concerned about the provisions in the exit paperwork they signed could approach him directly about it and it would be changed. 

About the Author
Jeremy Kahn, Editor, AI

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.


