Is the Pentagon allowed to surveil Americans with AI?
The ongoing public feud between the Department of Defense and the AI company Anthropic has raised a deep and still unanswered question: Does the law actually allow the US government to conduct mass surveillance on Americans?
Surprisingly, the answer is not straightforward. More than a decade after Edward Snowden exposed the NSA's collection of bulk metadata from Americans' phones, the US is still navigating a gap between what ordinary people assume the law forbids and what it actually allows.
The flashpoint in the standoff between Anthropic and the government was the Pentagon's desire to use Anthropic's AI, Claude, to analyze bulk commercial data on Americans. Anthropic demanded that its AI not be used for mass domestic surveillance (or for autonomous weapons, machines that can kill targets without human oversight). A week after negotiations broke down, the Pentagon designated Anthropic a supply chain risk, a label typically reserved for foreign companies that pose a threat to national security.
Meanwhile, OpenAI, the rival AI company behind ChatGPT, sealed a deal that allowed the Pentagon to use its AI for “all lawful purposes,” language that critics say left the door open to domestic surveillance. Over the following weekend, users uninstalled ChatGPT in droves. Protesters chalked messages around OpenAI's headquarters in San Francisco: “What are your redlines?”
OpenAI announced on Monday that it had reworked its deal to make sure that its AI will not be used for domestic surveillance. The company added that its services will not be used by intelligence agencies, such as the NSA.
CEO Sam Altman suggested that existing law prohibits domestic surveillance by the Department of Defense (now sometimes called the Department of War) and that OpenAI's contract simply needed to reference this law. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he wrote on X. Anthropic CEO Dario Amodei argued the opposite. “To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI,” he wrote in a policy statement.
So, who is right? Does the law allow the Pentagon to surveil Americans using AI?
Supercharged surveillance
The answer depends on what we think counts as surveillance. “A lot of stuff that normal people would consider a search or surveillance … is not actually considered a search or surveillance by the law,” says Alan Rozenshtein, a law professor at the University of Minnesota Law School. That means public information, such as social media posts, surveillance camera footage, and voter registration records, is fair game. So is information on Americans picked up incidentally from surveillance of foreign nationals.
Most notably, the government can purchase commercial data from companies, which can include sensitive personal information like mobile location data and web browsing records. In recent years, agencies from ICE and the IRS to the FBI and NSA have increasingly tapped into this data marketplace, fueled by an internet economy that harvests user data for advertising. These data sets can give the government access to information that would otherwise require a warrant or subpoena, which are normally needed to obtain sensitive personal data.
“There's a huge amount of information that the government can collect on Americans that is not itself regulated either by the Constitution, which is the Fourth Amendment, or statute,” says Rozenshtein. And there aren't meaningful limits on what the government can do with all this data.
That's because until the last several decades, people weren't generating massive clouds of data that opened up new possibilities for surveillance. The Fourth Amendment, which protects against unreasonable search and seizure, was written when collecting information meant entering people's homes.
Subsequent laws, like the Foreign Intelligence Surveillance Act of 1978 or the Electronic Communications Privacy Act of 1986, were passed when surveillance involved wiretapping phone calls and intercepting emails. The bulk of laws governing surveillance were on the books before the internet took off. We weren't generating vast trails of online data, and the government didn't have sophisticated tools to analyze the data.
Now we do, and AI supercharges the kind of surveillance that can be carried out. “What AI can do is it can take a lot of information, none of which is by itself sensitive, and therefore none of which by itself is regulated, and it can give the government a lot of powers that the government didn't have before,” says Rozenshtein.
AI can aggregate individual pieces of information to spot patterns, draw inferences, and build detailed profiles of people, at massive scale. And as long as the government collects the information lawfully, it can do whatever it wants with that information, including feeding it to AI systems. “The law has not caught up with technological reality,” says Rozenshtein.
While surveillance can raise serious privacy concerns, the Pentagon can have legitimate national security interests in collecting and analyzing data on Americans. “In order to collect information on Americans, it has to be for a very specific subset of missions,” says Loren Voss, a former military intelligence officer at the Pentagon.
For example, a counterintelligence mission might require information about an American who is working for a foreign country, or plotting to engage in international terrorist activities. But targeted intelligence can sometimes stretch into collecting more data. “This kind of collection does make people nervous,” says Voss.
Lawful use
OpenAI has amended its contract to say that the company's AI system “shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” in line with relevant laws. The amendment clarifies that this prohibits “deliberate tracking, surveillance or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”
But the added language might not do much to override the clause that the Pentagon may use the company's AI system for all lawful purposes, which could include collecting and analyzing sensitive personal information. “OpenAI can say whatever it wants in its agreement … but the Pentagon's gonna use the tech for what it perceives to be lawful,” says Jessica Tillipman, a law professor at the George Washington University Law School. That could include domestic surveillance. “Most of the time, companies are not going to be able to stop the Pentagon from doing anything,” she says.
The language also leaves open questions about “inadvertent” surveillance, and the surveillance of foreign nationals or undocumented immigrants living in the US. “What happens when there's a disagreement about what the law is, or when the law changes?” says Tillipman.
OpenAI did not respond to a request for comment. The company has not publicly shared the full text of its new contract.
Beyond the contract, OpenAI says that it will impose technical safeguards to enforce its red line against surveillance, including a “safety stack” that monitors and blocks prohibited uses. The company also says it will deploy its own employees to work with the Pentagon and remain in the loop. But it's unclear how a safety stack would constrain the Pentagon's use of the AI, and to what extent OpenAI's employees would have visibility into how its AI systems are used. More important, it's unclear whether the contract gives OpenAI the power to block a legal use of the technology.
But that might not be a bad thing. Giving an AI company the power to pull the plug on its technology in the middle of government operations carries its own risks. “You wouldn't want the US military to ever be in a situation where they legitimately needed to take actions to protect this country's national security, and you had a private company turn off technology,” says Voss. But that doesn't mean there shouldn't be hard lines drawn by Congress, she says.
None of these questions are simple. They involve brutally difficult trade-offs between privacy and national security. And that's perhaps why they should be decided by the public, not in backroom negotiations between the executive branch and a handful of AI companies. For now, military AI is being regulated by contracts, not legislation.
Some lawmakers are starting to weigh in. On Monday, Senator Ron Wyden of Oregon will seek bipartisan support for legislation addressing mass surveillance. He has championed bills restricting the government's purchase of commercial data, including the Fourth Amendment Is Not For Sale Act, which was first introduced in 2021 but has not been passed into law. “Creating AI profiles of Americans based on that data represents a chilling expansion of mass surveillance that should not be allowed,” he said in a recent statement.
Source: Technologyreview.com
Original source: https://www.technologyreview.com/2026/03/06/1134012/is-the-pentagon-allowed-to-surveil-americans-with-ai/