Anthropic-Pentagon talks stall over AI limits
Source: Bloomberg

Anthropic PBC’s discussions to extend its contract with the Pentagon have hit a snag over proposed safeguards the artificial intelligence company wants to impose on its Claude system, according to a person familiar with the matter.

Anthropic is seeking to add guardrails that would prevent Claude from being used for mass surveillance of Americans or for developing weapons that can operate without human oversight, said the person, who requested anonymity because the negotiations are private, News.Az reports, citing Bloomberg.

The Pentagon, however, wants flexibility to use Claude provided its deployment complies with the law.

The debate highlights growing concerns around the use of advanced AI systems in military and intelligence settings, particularly in weapons development and large-scale data collection. Anthropic, which positions itself as a safety-focused AI company aiming to prevent catastrophic harms, created Claude Gov specifically for U.S. national security agencies and intends to serve government clients within its ethical framework.

Claude Gov is designed with enhanced capabilities for handling classified materials, interpreting intelligence data and analyzing cybersecurity information.

“Anthropic is committed to using frontier AI in support of US national security,” a company spokesperson said, adding that the firm is engaged in “productive conversations, in good faith” with the Defense Department “on how to continue that work and get these complex issues right.”

Pentagon spokesman Sean Parnell confirmed that the relationship is under review. “The Department of War’s relationship with Anthropic is being reviewed,” he said in an emailed statement. “Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.”

Some defense officials reportedly view Anthropic as a potential supply-chain risk, and the Defense Department may require vendors to certify that they do not rely on Anthropic’s models, according to a senior defense official.

The disagreement could create opportunities for rival AI firms. According to the official, other companies — including OpenAI with ChatGPT, Alphabet’s Google Gemini and xAI’s Grok — have been working with the Pentagon to ensure their platforms can be used lawfully.

Anthropic secured a two-year agreement with the Pentagon last year involving a prototype of Claude Gov models and Claude for Enterprise. The outcome of the current negotiations could influence future discussions with OpenAI, Google and xAI, whose systems are not yet used for classified Pentagon work, Axios reported, citing unnamed sources.

An OpenAI spokesperson declined to comment, referring instead to a company blog post about its GenAI.mil tool designed for the Pentagon and other democratic governments. Google did not respond to a request for comment, and a representative for X, the parent company of xAI, said there was no immediate statement.


By Nijat Babayev
