
OpenAI sweeps in to ink deal with Pentagon as Anthropic is designated a ‘supply chain risk’—an unprecedented action likely to crimp its growth

By Jeremy Kahn, Editor, AI
February 28, 2026, 10:45 AM ET
OpenAI CEO Sam Altman. Altman announced Friday his company had secured a coveted Pentagon contract hours after the Department of War designated its arch-rival Anthropic a 'supply chain risk,' stripping it of its own military contracts and mandating that all defense contractors stop using Anthropic's AI models. Kyle Grillot—Bloomberg via Getty Images

OpenAI announced late Friday it reached a deal for the Pentagon to use its AI models in classified systems, just hours after the U.S. government designated OpenAI arch-rival Anthropic a “supply chain risk” in a move that threatens to deal a serious blow to Anthropic’s business.

Legal and policy experts said the government’s unprecedented decision raises profound questions about the relationship between government and business in the U.S. It is the first time the U.S. has ever designated an American company a supply chain risk, and the first time the designation has been used in apparent retaliation for a company refusing to agree to certain contractual terms. Anthropic said in a statement Friday that it would take legal action to try to overturn the Pentagon’s designation.


In a statement announcing the deal, OpenAI CEO Sam Altman said that the company’s agreement with the Pentagon contains the same two limitations on how the military can use its technology that Anthropic had been insisting on, and which the government had said it could not accept.

But OpenAI seems to have sought to enshrine these limits in its agreement differently than Anthropic did. While Anthropic tried to have the limits spelled out explicitly in the contract, OpenAI agreed that the Pentagon could use its technology for “any lawful purpose,” even as Altman said of the limitations that OpenAI “put them into our agreement.”

It is unclear exactly how both of these things can be true, or how the limitations are stated in the agreement. It may simply be that the contract language notes that current U.S. law prohibits the Pentagon from deploying AI for mass surveillance of Americans, and that current U.S. military policy requires “appropriate levels of human judgment” over the use of lethal force.

OpenAI also said the Pentagon agreed that the company could build technical safeguards into its AI models to prevent them from being used either for mass surveillance of U.S. citizens or in lethal autonomous weapons.

“We are asking the [Department of War] to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept,” Altman said. Some commentators interpreted Altman’s remark as a veiled criticism of Anthropic, which had not agreed to these terms previously and instead insisted on explicit contractual restrictions on how its models could be used.

Altman had previously voiced public support for Anthropic’s position on the limitations it was seeking. Numerous OpenAI employees also signed an open letter backing Anthropic CEO Dario Amodei’s insistence that the company’s models not be used for mass surveillance or in autonomous weapons.

The potential impact of a ‘supply chain risk’ designation

The extent of the damage the “supply chain risk” designation will do to Anthropic’s business remained unclear over the weekend. Anthropic had a $200 million contract with the Pentagon that has now been canceled. But that is not a huge blow to a company reportedly on track to generate at least $18 billion in revenue this year.

Instead, the larger concern is how many other enterprises will have to stop using Anthropic’s technology. President Trump said on Truth Social that all federal departments were being ordered to stop using Anthropic’s AI immediately, though with a six-month phase-in of the order to prevent disruption. Total federal technology spending is about $140 billion per year, but the amount the U.S. government currently spends on AI is a fraction of that.

The greatest danger, though, lies in how Secretary of War Pete Hegseth has interpreted the supply chain risk designation. Hegseth said in a social media post that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

If that interpretation stands, it would do potentially catastrophic damage to Anthropic’s business, because many of the large enterprises that have been rapidly adopting Anthropic’s Claude models for software coding and other use cases also do some business with the U.S. military. It might also mean that companies such as Amazon, Google, and Nvidia that have invested billions of dollars into Anthropic would have to divest from the company, potentially leaving it with a large funding hole and making it difficult to raise further funds from U.S. investors.

Anthropic earlier this month announced it had closed a new $30 billion venture capital funding round that valued the company at $380 billion. It has reportedly been hiring financial and legal advisors for a potential IPO that could come late this year or early next. But its fight with the Pentagon now casts a pall over that prospect.

Many legal analysts and AI policy experts questioned Hegseth’s broad interpretation of the “supply chain risk” designation. Peter Harrell, a former Biden administration National Security Council official and a visiting scholar at Georgetown University Law School, posted on X that DoW’s supply chain risk designation applies only to work on Department of War contracts. “DoW can’t, legally, tell its contractors ‘don’t use Anthropic even in your private contracts,’” Harrell said.

Dean Ball, a senior fellow at the Foundation for American Innovation and a former AI policy advisor to the Trump administration, said in a post on X that Hegseth’s interpretation of the supply chain risk designation was “almost surely illegal” and amounted to “attempted corporate murder.” He said Hegseth’s actions—which he called “a psychotic power grab”—sent a terrible message to any business about whether it should ever risk doing business with the U.S. government.

Several legal experts noted that even a more narrowly interpreted decision to designate Anthropic a supply chain risk may not survive a legal challenge. Charlie Bullock, a senior research fellow at the Institute for Law & AI, told Wired that the government cannot make the designation without completing a risk assessment (it is unclear whether one was conducted) and without notifying Congress before taking action, which also does not appear to have occurred.

Amos Toh, a senior counsel at the Brennan Center for Justice at New York University, was also among several legal experts who said that the supply chain risk designation requires the government to prove that there is a risk of sabotage, subversion, or manipulation of operations by an adversary. “It is not at all clear how adversaries could exploit Anthropic’s usage restrictions on Claude to sabotage military systems,” Toh told the defense news site DefenseScoop. The statute also requires that the Pentagon have exhausted any alternative, less intrusive courses of action to mitigate the risk prior to making the supply chain risk finding. Toh questioned whether the Pentagon could reasonably claim to have made a “good faith effort” to pursue less intrusive measures, given how quickly the Anthropic dispute escalated over the past few days.

Even if Anthropic ultimately prevails in challenging the supply chain risk designation in court, the damage to its business may already be done. “It will take years to resolve in court. And in the meantime, every general counsel at every Fortune 500 company with any Pentagon exposure is going to ask one question: is using Claude worth the risk?” Shenaka Anslem Perera, an independent analyst with a large social media following, posted on X.

About the Author

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.

