Anthropic rejects Pentagon request on AI safeguards
Artificial intelligence firm Anthropic said Thursday it will not comply with a request from the U.S. Department of Defense to loosen safeguards on its AI systems, citing concerns about mass surveillance and autonomous weapons.
In a statement, CEO Dario Amodei said the company opposes allowing its AI model, Claude, to be used for “mass domestic surveillance” or “fully autonomous weapons,” News.Az reports, citing Anadolu Agency.
He argued that advanced AI systems are not yet reliable enough to operate such weapons without human oversight and require protective measures that “don’t exist today.”
Amodei acknowledged that artificial intelligence can support national security objectives but cautioned that large-scale, AI-powered surveillance could threaten civil liberties.
Anthropic has been in discussions with the Pentagon for several weeks. The administration of Donald Trump has reportedly considered invoking the Defense Production Act, which allows the government to require companies to prioritize national defense contracts. Officials have also weighed designating Anthropic as a “supply chain risk,” a move that would bar Defense Department contractors from using its software.
According to Axios, the Pentagon has begun steps toward that designation and has asked Boeing and Lockheed Martin to disclose their reliance on Claude.
Pentagon spokesperson Sean Parnell denied that the department intends to use AI for unlawful surveillance or for fully autonomous weapons without human involvement. In a post on X, he said the Pentagon seeks to use Anthropic’s model for “all lawful purposes” and would not allow a private company to dictate operational decisions.
By Nijat Babayev