
In its fight with the Pentagon, Anthropic confronts one of the biggest crises of its five-year existence

By Jeremy Kahn, Editor, AI
February 25, 2026, 8:04 PM ET
Anthropic CEO Dario Amodei. His meeting earlier this week with U.S. Secretary of War Pete Hegseth failed to resolve the Pentagon's conflict with Anthropic over restrictions on how the U.S. military can use the company's AI models. Prakash Singh—Bloomberg/Getty Images

AI company Anthropic is facing perhaps the biggest crisis of its five-year existence as it stares down a Friday deadline: remove restrictions on how the U.S. Department of War can use its technology, or face the possibility of Pentagon action that could cripple its business.


Pete Hegseth, the U.S. secretary of war, has demanded that Anthropic remove restrictions it currently stipulates in its contracts that prohibit its AI models from being used for mass surveillance or from being incorporated into lethal autonomous weapons, which can make decisions to attack without human intervention. Instead, Hegseth wants Anthropic to stipulate that its technology can be used for “any lawful purpose” that the Department of War wishes to pursue.

If the company does not comply by Friday, Hegseth has threatened not only to cancel Anthropic’s existing $200 million contract with his department, but also to have the company labeled a “supply-chain risk,” meaning that no company doing business with the Department of War would be allowed to use Anthropic’s models. That could eviscerate Anthropic’s growth—just as the company, currently valued at $380 billion, has been seeing significant commercial traction and is contemplating an initial public offering as soon as next year.

A Tuesday meeting between Hegseth and Anthropic CEO Dario Amodei in Washington, D.C., failed to resolve the conflict and ended with Hegseth reiterating his ultimatum.

The dispute comes against a backdrop of sometimes overt hostility toward Anthropic from other Trump administration officials. AI czar David Sacks in particular has publicly attacked the company on social media for representing “woke AI” and the “doomer industrial complex,” and has accused it of engaging in a “sophisticated regulatory capture strategy based on fearmongering.” His argument, in essence, is that Anthropic executives disingenuously warn of extreme risks from AI systems in order to justify regulations on the technology with which only Anthropic and a few other AI companies can easily comply.

Anthropic’s Amodei has called such views “inaccurate” and insisted that the company shares many policy goals with the Trump administration, including wanting to see the U.S. remain at the forefront of the development of AI technology.

Nonetheless, Sacks and others within the administration may be hoping Hegseth makes good on his threats to blacklist Anthropic from the national security supply chain.

Other AI companies, such as OpenAI and Google, have apparently not imposed restrictions on how the U.S. military uses their tech.

Principles versus pragmatism

Working with the military has been controversial among some technology workers. In 2018, Google faced a vocal staff rebellion over its decision to help the Pentagon with “Project Maven,” an effort to use AI to analyze aerial surveillance imagery. The employee revolt forced Google to pull out of a bid to renew its contract to work on the project. But in the years since, the internet giant has quietly renewed its ties with the defense establishment, and in December, the Department of War announced it would deploy Google’s Gemini AI models for a number of use cases.

Owen Daniels, associate director of analysis at the Center for Security and Emerging Technology (CSET) at Georgetown University, told the Associated Press that “Anthropic’s peers, including Meta, Google, and xAI, have been willing to comply with the department’s policy on using models for all lawful applications. So the company’s bargaining power here is limited, and it risks losing influence in the department’s push to adopt AI.”

But principles may be an unusually powerful motivator for Anthropic employees. The company was founded by a group of researchers who broke away from OpenAI in part because they were concerned that the lab was allowing commercial pressures to divert it from its original mission of ensuring powerful AI is developed for humanity’s benefit. And more recently, Anthropic staked out principled positions on not incorporating advertising into its Claude products and not developing chatbots specifically designed to be romantic or erotic companions.

Given the company’s culture, some outside commentators have speculated that at least some Anthropic staff will resign if the company gives in to Hegseth’s demands and drops the limitations currently built into its government contracts.

Hegseth has also said there is another option available to the Pentagon if Anthropic does not comply with its request voluntarily. This would involve using the Defense Production Act of 1950 to compel Anthropic to offer the military a version of its Claude model without any restrictions in place. 

The DPA, which was originally designed to allow the government to take charge of civilian manufacturing in the event of war, was invoked during the COVID-19 pandemic to compel companies to produce protective equipment and vaccines. Since then, it has been used numerous times, mostly by the Biden administration, even in the absence of a clear national emergency. For instance, in 2023 the Biden White House invoked the DPA to force tech companies to share information about the safety testing of their advanced AI models with the government.

Katie Sweeten, who served until September 2025 as the Department of Justice’s liaison to the Department of Defense and is now a partner at the law firm Scale, told CNN that Hegseth’s position didn’t make sense from a policy perspective. “I would assume we don’t want to utilize the technology that is the supply-chain risk, right? So I don’t know how you square that,” she said.

Dean Ball, who served as an AI policy advisor to the Trump administration, helping to draft its AI Action Plan, and who is now a senior fellow at the Foundation for American Innovation, also called the Pentagon’s position “incoherent” in a post on X. “How can one policy option be ‘supply-chain risk’ (usually used on foreign adversaries) and the other be DPA (emergency commandeering of critical assets)?” he said.

Ball told TechCrunch that imposing the supply-chain risk label would send a terrible message to any company doing business with the government. “It would basically be the government saying, ‘If you disagree with us politically, we’re going to try to put you out of business,’” he said. 

Some legal commentators noted that both sides of the dispute had some legitimate arguments. “We wouldn’t want Lockheed Martin selling the military an F-35 and then telling the Pentagon which missions it could fly,” Alan Rozenshtein, an associate professor of law at the University of Minnesota and a fellow at Brookings, said in a column posted on the site Lawfare.

But Rozenshtein also argued that Congress, not the Pentagon, should set the rules for how the U.S. military deploys AI. “The terms governing how the military uses the most transformative technology of the century are being set through bilateral haggling between a defense secretary and a startup CEO, with no democratic input and no durable constraints,” he wrote.

As of midweek, Anthropic showed no signs of backing down from its position.

Claude’s future at stake

Helen Toner, the interim executive director of Georgetown’s CSET and a former OpenAI board member, posted on X that the Pentagon is likely underestimating the extent to which Anthropic may be reluctant to abandon its position because—as weird as this sounds—doing so might set a bad example for future versions of Claude. Anthropic researchers have increasingly voiced concerns about what each successive version of Claude learns about its own character based on training data that now includes news articles and social media commentary about Claude itself. 

But the company has compromised before when its back has been against the wall. In June 2025, Anthropic faced a potentially existential threat when a federal judge ruled that its use of libraries of pirated books to train its Claude AI models was likely a violation of copyright law. This left the company facing tens of billions of dollars in potential liabilities if it took the case to a full trial and lost. Instead of continuing to fight the case, Anthropic announced a $1.5 billion settlement with the copyright holders.

And just this past week, Anthropic demonstrated again, in a different context, that it is sometimes willing to put pragmatism and commercial imperatives ahead of high-minded principles. The company updated its Responsible Scaling Policy (RSP), dropping a previous commitment to never train an AI model unless it could guarantee it had adequate safety controls in place. The new RSP instead simply commits Anthropic to matching or surpassing the safety efforts being made by competitors. It also says Anthropic will delay developing models if the company believes it has a clear lead over the competition and thinks the model it’s training presents a significant catastrophic risk. Jared Kaplan, Anthropic’s head of research, told Time that “unilateral commitments” no longer made sense if “competitors are blazing ahead.”

Whether Anthropic will make a similar concession to commercial pressures in its fight with the Department of War remains to be seen. 

About the Author

Jeremy Kahn is the AI editor at Fortune, spearheading the publication’s coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.
