OpenAI is rewriting parts of its contract with the Pentagon to prohibit the use of its artificial intelligence tools for surveillance on Americans.
This comes after the company stepped into the showdown between the Pentagon and artificial intelligence company Anthropic and a backlash against the notion that the federal government could employ AI technology to spy on Americans.
From Business Insider:
OpenAI said it is amending its contract with the Pentagon.
After public concerns that OpenAI’s new deal with the Pentagon would allow the government to use its AI for mass surveillance, CEO Sam Altman posted an internal memo to X on Monday evening, saying that the company is working with the Pentagon to “make some additions in our agreement.”
“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of US persons and nationals,” Altman wrote on X.
“The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA). Any services to those agencies would require a follow-on modification to our contract,” Altman added.
Almost 500 OpenAI and Google employees signed an open letter supporting Anthropic’s decision to abandon a similar deal with the Pentagon instead of relaxing its bans on mass surveillance. Critics cautioned that the “all lawful purposes” language could allow the Pentagon to use OpenAI’s technology to examine commercially purchased data and social media at scale.
A QuitGPT boycott campaign on social media accused the company of giving the government a way to expand its surveillance powers. Protesters also gathered at OpenAI's San Francisco and London offices to pressure the company to rethink the contract.
Those concerned about surveillance point to how federal agencies are already using AI to bolster their surveillance operations. The Department of Homeland Security (DHS) listed over 200 use cases in which Immigration and Customs Enforcement (ICE) used the technology to scan through tips, monitor social media, and analyze phone and location data, Tech Policy Press reported.
Civil liberties advocacy groups claim DHS has used at least 23 applications that use facial recognition, face matching, or biometric identification.
The Justice Department reported increases in how it uses "high-impact" AI tools for crime prediction, data-driven policing, and communications monitoring.
From Fedscoop:
From litigation to federal prisons to criminal investigations, artificial intelligence appears to have touched nearly every corner of the Department of Justice in the past year.
Just two years ago, the DOJ reported four use cases of AI at the agency. In its most recent 2025 use case inventory, the agency logged 315 cases, a 31% increase from last year. The use cases varied widely in function, though technology and privacy experts took particular note of instances where AI was deployed at the agency for crime prediction, public surveillance, and litigation.
Of these cases, 114 were deemed “high-impact” by the agency. Under the latest guidance, high-impact AI includes models that could have “significant impacts” when deployed, including for decisions or actions with a “legal, material, binding or significant effect on rights and safety.”
Jay Stanley, a senior policy analyst with the American Civil Liberties Union’s Speech, Privacy, and Technology Project, told FedScoop that the DOJ’s 2025 inventory provides a “snapshot” of how the federal government “is aggressively seeking to test and exploit a wide variety of AI algorithms and sifting through data on ordinary people.”
This development has prompted debate over how the government employs AI as the technology becomes increasingly ubiquitous in everyday life. Companies like OpenAI and Anthropic are struggling to strike a balance between allowing their tools to be used for legitimate purposes and preventing the government from abusing them.
