
OpenAI CEO Sam Altman defends decision to strike Pentagon deal after Anthropic blacklisting, admits ‘optics don’t look good’

By Jeremy Kahn, Editor, AI
March 2, 2026, 11:46 AM ET
OpenAI CEO Sam Altman. Over the weekend, Altman held an "ask me anything" session on social platform X to defend OpenAI's decision to sign a deal with the Pentagon just hours after the Department of War blacklisted rival Anthropic as a "supply chain risk." Anthropic had wanted guarantees its AI would not be used for mass surveillance or autonomous weapons; OpenAI said its Pentagon deal achieved the same restrictions in a different way. But OpenAI faced a mounting backlash over its agreement. Photo: Prakash Singh—Bloomberg/Getty Images

OpenAI CEO Sam Altman and other senior executives took to social media over the weekend to defend their decision, announced on Friday, to strike a deal with the Department of War to allow the company’s models to be used in classified military networks. The deal came hours after archrival Anthropic turned down a similar agreement with the Pentagon and the Trump administration said it was labeling Anthropic a “supply-chain risk.”


OpenAI faced vocal backlash for agreeing to the Pentagon deal, since Altman had earlier in the week voiced support for Anthropic's position that it would not accept a Pentagon contract without explicit prohibitions on its AI technology being used for mass surveillance of U.S. citizens or being incorporated into autonomous weapons that can decide to strike targets without human oversight.

Some of these critics have even started a campaign to persuade ChatGPT users to abandon the chatbot and switch to Anthropic's Claude. There was some evidence the campaign was having an effect: Claude surged past ChatGPT to become the most downloaded free app in Apple's App Store. The sidewalk outside OpenAI's offices in San Francisco was also covered with chalk graffiti attacking its decision to cut a deal with the Pentagon, while graffiti outside Anthropic's offices largely praised its decision to refuse a contract that did not include prohibitions on the use of its AI models for mass surveillance and autonomous weapons.

Some of Altman’s and OpenAI’s social media push over the weekend seemed aimed at quelling concerns among the company’s own employees over the Pentagon contract. Many rank-and-file OpenAI employees had signed an open letter last week supporting Anthropic’s refusal to accede to the Pentagon’s demands and opposing its decision to designate Anthropic a supply-chain risk. (Altman also said over the weekend that he disagreed with the supply-chain risk designation.)

And at least one OpenAI employee publicly questioned whether the company’s contract with the Pentagon provided robust safeguards. Leo Gao, an OpenAI employee who works on making sure increasingly powerful AI models stay aligned with user intentions and human values, criticized his employer on X for agreeing to let the DOW use its technology for “all lawful purposes” and then engaging in what Gao called “window dressing” to make it seem like there were further restrictions on what the Pentagon could do with OpenAI’s GPT models.

Altman admitted in an "Ask Me Anything" session on social media platform X on Saturday night that the company's deal with the Pentagon "was definitely rushed, and the optics don't look good."

But he insisted that OpenAI moved quickly to make the deal because it wanted to de-escalate the increasingly heated standoff between the U.S. military and Anthropic. The fight threatened to damage the AI industry as a whole, in part by raising the prospect of the U.S. government nationalizing an AI lab, or at least using its power to coerce a private company into delivering technology on the government's preferred terms.

“If we are right and this does lead to a de-escalation between the DOW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry,” Altman said. “If not, we will continue to be characterized as rushed and uncareful.”

He added that “a good relationship between the government and the companies developing this technology is critical over the next couple of years.”

And he said he was opposed to Anthropic being labeled a supply-chain risk. “Enforcing the [supply-chain risk] designation on Anthropic would be very bad for our industry and our country,” Altman said. “To say it very clearly: I think this is a very bad decision from the DOW, and I hope they reverse it. If we take heat for strongly criticizing it, so be it.”

OpenAI said it had found a compromise that preserved the same limitations while acceding to the military's wish to face no contractual constraints on how it uses the AI technology it purchases. The company said limits on how its AI can be used rest both on references to existing law written into the DOW contract and on technical limitations on what its AI models will be able to do.

It said the DOW agreed to let it build these technical safeguards. They will include classifier systems that screen the prompts DOW users feed OpenAI's models and refuse any the classifier deems likely to violate OpenAI's redlines. They may also include fine-tuning of OpenAI's models so that they will not easily comply with instructions that violate the two redlines.

OpenAI says its contract attempts to bind Pentagon to current law

OpenAI published a portion of its contract with the DOW in which it said it agreed that its technology could be used “for all lawful purposes” but which also included specific references to existing U.S. laws and Department of War policy documents that establish limitations on the surveillance of U.S. citizens and on how autonomous weapons can be deployed.

Katrina Mulligan, OpenAI’s head of national security partnerships and a former chief of staff to the secretary of the Army, said during the Ask Me Anything on X that referencing these existing laws and policies provided more assurance that the Pentagon would not later violate the company’s redlines than some critics suggested. “We accepted the ‘all lawful uses’ language proposed by the department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract,” she said. “And because laws can change, having this codified in the contract protects against changes in law or policy that we can’t anticipate.”

Some legal experts pushed back on Mulligan’s position, at least as far as DOW policies on autonomous weapons are concerned. Charles Bullock, a senior fellow at the Institute for Law & AI, said on X that “DOW can, of course, change its own policies whenever it wants,” and that the contract language OpenAI released does not require the DOW to follow the existing policy in perpetuity. But he said that the contract did seem to bind DOW to following existing interpretations of existing laws governing mass surveillance of U.S. citizens.

Bullock also said it was impossible to know how ironclad the limitations contained in OpenAI’s contract are without assessing the entire contract, not just the small section OpenAI made public. OpenAI has said government rules bar it from publishing the entire contract because it is for a classified system.

A debate over the definition of ‘mass surveillance’

Many of those skeptical of OpenAI's agreement with the Pentagon noted that the term "mass surveillance" is not well-defined and questioned OpenAI executives on what would happen if military intelligence agencies attempted to use its AI models to analyze commercially available data—such as cell phone location data or data from fitness apps—that could be put together at scale to conduct surveillance of U.S. citizens in America. The Defense Intelligence Agency is believed to have purchased such data, and its use remains a legal gray area. Anthropic, according to a story in The Atlantic, was particularly concerned about the Pentagon using its technology for this kind of analysis, and its insistence on curtailing that use case was reportedly one of the major stumbling blocks in its negotiations with the DOW.

“We can’t protect against a government agency buying commercially available datasets, but our contract incorporates a prohibition on mass domestic surveillance as a binding condition of use,” Mulligan said during the AMA.

She also said that OpenAI’s decision to rely on a multipronged approach that included technical systems to limit what the Pentagon could do provided a more robust solution than simply relying on contractual language, which she said seemed to be Anthropic’s primary approach. She said Anthropic had not been able to lean on this technical solution because it was already providing versions of its AI models to the military that had some of the usual safeguards removed.

“Anthropic has primarily been concerned with usage policies, which is because their existing classified deployments involve reduced or removed safety guardrails (making usage policies the primary safeguards in national security deployments),” she said. “Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. That’s what we pursued in our negotiations, and that’s why we think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.”

Another OpenAI executive, Boaz Barak, who works on AI alignment and safety, also represented the company in the AMA and criticized Anthropic for fixating so heavily on contractual language and not other kinds of safeguards. “I get the impression that folks at Anthropic had unrealistic expectations for the contract stuff,” he said in response to a question from former OpenAI policy chief Miles Brundage, noting that tech companies were always going to be somewhat at the mercy of how the DOW interpreted terms in the contract.

Who should decide how AI is used?

Altman said that many of the questions in the AMA session touched on the issue of whether AI efforts should be nationalized. The OpenAI CEO noted, “It has seemed to me for a long time it might be better if building AGI [artificial general intelligence] were a government project.” But he added, “It doesn’t seem super likely on current trajectory.”

Altman also said he was surprised by how many of OpenAI's critics seemed to have more faith in unelected tech executives making decisions about the appropriate use of AI than in government officials who were, at least in theory, accountable to Congress and ultimately to voters.

“I very deeply believe in the democratic process, and that our elected leaders have the power, and that we all have to uphold the Constitution. I am terrified of a world where AI companies act like they have more power than the government,” Altman said on X. “I would also be terrified of a world where our government decided mass domestic surveillance was okay.”

About the Author

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune's flagship AI newsletter.
