Anthropic is suing the Department of Defense and other federal agencies after the Trump administration formally designated the company a “supply chain risk” late last week.
The development is the latest turn in a dispute between the Pentagon and Anthropic over the Trump administration's use of the company's AI technology, a fight with big implications for the control of AI overall and for the relationship between business and government.
Anthropic had been trying to ensure that the government would not use its AI model Claude for domestic mass surveillance or autonomous weapons. The Pentagon, which has been using Claude for a variety of purposes, including processing intelligence, wanted those restrictions removed from Anthropic's existing contract and wanted Anthropic to agree to a new contract under which the military could deploy Claude for "all lawful use."
Anthropic refused to agree to these terms. In response, the Trump Administration cancelled the company’s deals with the government and designated it a supply chain risk, a label historically reserved for companies tied to foreign adversaries. The designation means that no defense contractors can use Anthropic’s technology in the completion of any work for the Department of War.
Pete Hegseth, the secretary of war, had said that the military would stop using Claude "immediately," but also that there would be a six-month phase-out of the technology to prevent disruption to critical operations. The military has reportedly been using Claude in its current war with Iran to help process intelligence and targeting data. Hegseth had also said the supply chain risk designation would require defense contractors to sever all commercial ties to Anthropic, a reading most legal experts say is not supported by the statute on supply chain risks. Anthropic has said the formal designation it received from the Pentagon applies only to work on defense contracts, not to commercial work unrelated to those contracts.
The lawsuit, filed Monday in the U.S. District Court for the Northern District of California, calls the administration’s actions “unprecedented and unlawful” and claims they threaten to harm “Anthropic irreparably.” The complaint claims that government contracts are already being canceled and that private contracts are also in doubt, putting “hundreds of millions of dollars” at near-term risk.
An Anthropic spokesperson told Fortune: “Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners.”
“We will continue to pursue every path toward resolution, including dialogue with the government,” they added. The Department of War told Fortune it does not comment on litigation as a matter of policy.
The supply chain risk designation requires defense vendors and contractors to certify they are not using Anthropic’s models in their Pentagon work. Trump also directed federal agencies to “immediately cease” all use of Anthropic’s technology, writing in a Truth Social post: “WE will decide the fate of our Country – NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about.”
Legal experts have questioned whether the supply chain risk designation is legally sound. In an article in the non-profit publication Lawfare, lawyers Michael Endrias and Alan Z. Rozenshtein argued the designation “exceeds what the statute authorizes,” that “the required findings don’t hold up,” and that Hegseth’s own public statements “may have doomed the government’s litigation posture before it even begins.”
“The government cannot simultaneously claim a vendor poses an acute supply chain threat requiring emergency exclusion and that it’s perfectly safe to keep using the vendor for half a year,” they wrote, calling the overall designation “political theater: a show of force that will not stick.”
Complicating matters, just hours after the Anthropic-Pentagon negotiations fell apart, OpenAI struck its own deal with the Department of War, seemingly agreeing to provide its models without the contractual limitations Anthropic had been insisting upon but with what OpenAI has said are additional contractual and technical safeguards that will, in practice, result in the same restrictions on the use of its technology that Anthropic had been trying to achieve.
The deal quickly drew sharp criticism, with many questioning whether OpenAI’s contract language offered meaningfully different protections than what Anthropic had rejected. OpenAI later acknowledged the announcement looked “sloppy and opportunistic” and said it was renegotiating some of its terms.
Relations between the two rival companies have since deteriorated. In an internal memo reported by The Information, Anthropic CEO Dario Amodei called OpenAI staff “gullible” and accused the company’s leadership of spreading “straight up lies.” Amodei later apologized for the message, saying the memo had been written just hours after negotiations fell apart and did not reflect his “careful or considered views.”