On Friday, just hours after publicly backing rival Anthropic for standing firm against the Pentagon’s demands, OpenAI CEO Sam Altman announced his company had struck its own deal with the Department of Defense. The move came shortly after the U.S. government had taken the highly unusual step of designating Anthropic a “supply-chain risk.”
OpenAI’s decision drew criticism from many AI researchers and tech policy experts, even though OpenAI said its agreement included limits on surveillance of U.S. citizens and on lethal autonomous weapons, the same terms Anthropic had sought in its own contract but the Pentagon had refused to grant.
One of the key points of contention was over domestic mass surveillance. Experts have long warned that advanced AI is capable of taking scattered, individually innocuous data—like a person’s location, finances, search history—and assembling it into a comprehensive picture of any person’s life, automatically and at scale. Anthropic CEO Dario Amodei has said that this kind of AI-driven mass surveillance presents serious and novel risks to people’s “fundamental liberties” and that “the law has not yet caught up with the rapidly growing capabilities of AI.”
But while OpenAI said in a blog post that it had reached a deal with the Pentagon under which its technology would not be used for mass domestic surveillance or for direct autonomous weapons systems (the two hard limits Anthropic had refused to drop), some legal and policy experts have raised questions about a potential gap in the law.
Part of the dispute hinges on a gap in U.S. law: large-scale analysis of Americans’ data is lawful under current statutes, even when it is functionally indistinguishable from mass surveillance.
“Right now, under U.S. law, it’s lawful for government authorities to buy up commercially available information from data brokers and other third parties,” said Samir Jain, the vice president of policy at the Center for Democracy & Technology. “If you buy up massive amounts of data and allow AI to analyze it, you may end up, in effect, engaging in mass surveillance of Americans through that process. It’s not currently restricted by law or prohibited by law.”
OpenAI says its “redlines” are enforced through technical systems it plans to build as well as through language in its contract with the Pentagon. According to a blog post released by the company, the contract permits the Department of Defense to use the AI “for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols,” while explicitly prohibiting unconstrained monitoring of Americans’ private information.
The problem is that what counts as “lawful” can change. OpenAI’s contract points to existing laws and Department of Defense policies, but those policies could be modified in the future. “Nothing in what they’ve released would prevent those policies from being changed going forward,” Jain said.
Some critics argue that existing intelligence authorities already allow forms of surveillance that OpenAI says it prohibits. Mike Masnick, founder of the Techdirt blog, wrote on social media that the agreement “absolutely does allow for domestic surveillance,” pointing to Executive Order 12333, a long-standing authority that permits intelligence agencies to collect communications outside the United States, which can include Americans’ data when it is incidentally acquired.
Some of the debate centers on specific portions of U.S. law that govern different national security activities. The U.S. military’s actions are generally governed by Title 10 of the U.S. Code, which covers work the Defense Intelligence Agency and U.S. Cyber Command perform to support military operations. But some of the DIA’s work falls under a different portion of U.S. law, Title 50 of the U.S. Code, which generally governs covert intelligence gathering and covert action. The work of the Central Intelligence Agency and National Security Agency generally falls under Title 50, too. Some of the most sensitive Title 50 activities, especially covert actions, are conducted largely behind the scenes and require a presidential finding.
In a blog post published over the weekend, OpenAI shared a detailed account of its agreement with the Pentagon. And according to a social media post by well-known OpenAI researcher Noam Brown, the company’s head of national security partnerships, Katrina Mulligan, told him that OpenAI’s contract does not cover Title 50 work by the intelligence community, one of critics’ major concerns. Representatives for OpenAI did not immediately respond to a request for comment from Fortune.
But legal scholars have noted that the distinction between Title 10 and Title 50 activities is increasingly blurry. In practice, the two can look very similar, and both can involve analyzing data about foreign actors or tracking patterns. But that overlap creates a gray area for companies like OpenAI: A contract that bans Title 50 work doesn’t automatically prevent Title 10 agencies like the DIA from using AI to analyze commercially available or unclassified datasets.
“If they’re saying that their system can’t be used for any Title 50 activities, then that reduces the scope of activities for which the AI system can be used,” Jain said. “But that doesn’t solve the problem.”