Anthropic accuses Chinese firms of stealing data from Claude

As Washington tightens export controls to maintain America’s edge in artificial intelligence, top AI firm Anthropic claims that three China-based AI laboratories have found an alternative way to access advanced U.S. technologies.

The U.S. firm alleges DeepSeek, Moonshot AI and MiniMax used roughly 24,000 fraudulent accounts to generate more than 16 million exchanges with Anthropic's Claude chatbot in a coordinated "distillation" campaign designed to extract high-value model outputs, News.Az reports, citing Fox News.

The threat goes beyond ripping off U.S. companies, according to the report. Anthropic argues that models built through large-scale distillation are unlikely to retain the safety guardrails embedded in frontier U.S. systems.

"Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems—enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance," Anthropic said. 

Anthropic says it identified the campaigns using IP address correlations, request metadata and infrastructure indicators that differed sharply from normal customer traffic. The activity, the company said, was concentrated on Claude’s most advanced capabilities — including complex reasoning, coding and tool use — rather than casual consumer prompts.

"We have high confidence these labs were conducting distillation attacks at scale," Jacob Klein, Anthropic’s head of threat intelligence, told Fox News Digital.

Distillation is a common AI training technique in which a smaller or less capable model is trained on the outputs of a stronger one. Frontier labs often use it internally to create cheaper versions of their own systems. 
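The mechanics can be illustrated with a minimal sketch (purely illustrative, not a reconstruction of any lab's actual pipeline): a small "student" model is trained only on the output distributions of a fixed "teacher," and ends up reproducing the teacher's behavior without ever seeing its weights or training data. Here both models are toy linear classifiers.

```python
import numpy as np

# Minimal, hypothetical sketch of knowledge distillation: a student model
# learns solely from a teacher's outputs ("answers" to queries), never from
# the teacher's weights or original training data.
rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Fixed teacher: a random linear map over 4 features and 3 classes.
W_teacher = rng.normal(size=(4, 3))
X = rng.normal(size=(256, 4))           # queries sent to the teacher
teacher_probs = softmax(X @ W_teacher)  # soft outputs collected from it

# Student starts from scratch and trains by gradient descent on the
# cross-entropy between its distribution and the teacher's.
W_student = np.zeros((4, 3))
lr = 0.5
for _ in range(300):
    student_probs = softmax(X @ W_student)
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= lr * grad

# After training, the student largely mimics the teacher's decisions.
student_probs = softmax(X @ W_student)
agreement = np.mean(
    student_probs.argmax(axis=1) == teacher_probs.argmax(axis=1)
)
print(f"student/teacher agreement: {agreement:.2f}")
```

The point of the sketch is that the extraction happens entirely through queries and responses, which is why export controls on chips and model weights do not directly block it.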

But Anthropic says the campaigns it uncovered were unauthorized and designed to shortcut years of research and reinforcement learning work.

Across the three operations, more than 16 million exchanges were allegedly generated over a period ranging from weeks to months, according to Klein. Anthropic intervened after detecting the activity, though he acknowledged the broader challenge is ongoing.

"There isn’t an immediate silver bullet to stop all of these," Klein said. "We view this as larger than Anthropic."

While the company cannot precisely quantify how much the Chinese labs allegedly improved their systems, Klein said the capability gains were "meaningful" and "substantial."

"What we can say with confidence is they distilled us at scale," he said.

Fox News Digital reached out to DeepSeek, Moonshot AI and MiniMax for comment and has yet to receive replies.

The report raises new questions about the effectiveness of current U.S. export controls, which have largely focused on limiting China’s access to advanced AI chips and on blocking direct transfers of model weights.

Klein argued that distillation targets a different layer of competitive advantage — the reinforcement learning process that refines and sharpens frontier models after they are trained.

"If you think about how you stay ahead in the AI race, compute is one piece of that," Klein said. "But increasingly reinforcement learning is critical. Distillation allows you to extract those capabilities."

He emphasized that advanced chips still "very much matter," but said policymakers must think about the issue "holistically."

Anthropic said it has shared its findings with relevant U.S. government entities and industry partners. Klein suggested publicly naming the labs could prompt "thoughtful government action" or at least engagement with the companies involved.

At the same time, the company said it has no evidence that the Chinese government directly coordinated the campaigns, though proxy services that resell access to U.S. frontier AI models operate openly in China.

Washington has tried to slow China’s AI progress by limiting access to the most advanced computer chips used to train powerful systems. But Anthropic argues that even without direct access to those chips, foreign labs can still copy parts of a model’s intelligence by repeatedly querying it and training their own systems on the answers.



By Ulviyya Salmanli
