The U.S. artificial intelligence company Anthropic has announced that it identified what it described as “industrial-scale” campaigns by three China-based AI laboratories (DeepSeek, Moonshot AI, and MiniMax) to extract capabilities from its advanced model Claude through a technique known as model distillation.
According to the official statement published by Anthropic, these campaigns involved the creation of approximately 24,000 fraudulent accounts that generated more than 16 million interactions with the Claude model, allegedly to train or improve their own models by reproducing Claude’s outputs without authorization or compliance with the terms of service.
Distillation is a common machine-learning technique in which a smaller model learns from the outputs of a larger one. It is legitimate when used internally for model compression, but Anthropic argues that the way it was employed in these campaigns violated its usage policies and may constitute an attempt to replicate advanced capabilities without assuming the safety and responsibility commitments associated with the original versions of the model.
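The core idea behind distillation can be sketched with a toy loss function: the student model is trained to match the teacher's temperature-softened output distribution, following the classic knowledge-distillation formulation. The code below is purely illustrative; the function names, toy logits, and temperature value are assumptions for the example and have no connection to Claude or any of the models mentioned in this article.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature yields a softer distribution.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence from the teacher's soft targets to the student's
    # predictions, scaled by T^2 as in the standard distillation recipe.
    p = softmax(teacher_logits, temperature)   # soft targets from the teacher
    q = softmax(student_logits, temperature)   # student predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (temperature ** 2) * kl.mean()

# Toy batch: 2 examples, 3 classes (hypothetical values).
teacher_logits = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.0]])
student_far    = np.array([[-1.0, 2.0, 0.5], [1.5, -0.5, 0.2]])
student_near   = teacher_logits + 0.05  # student that nearly matches the teacher

# Minimizing this loss pulls the student's distribution toward the teacher's.
assert distillation_loss(student_near, teacher_logits) < \
       distillation_loss(student_far, teacher_logits)
```

In a real training loop this loss would be minimized by gradient descent over the student's parameters; the controversy described here concerns collecting the teacher outputs from a third-party model at scale, not the loss function itself.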
Anthropic maintains that these practices pose broader risks to the AI industry, especially if models distilled in an unregulated manner lack security mechanisms that prevent malicious uses, such as the automation of cyberattacks, large-scale disinformation generation, or applications in sensitive areas such as biotechnology.
In response, the company has strengthened its systems for detecting fraudulent patterns, shared intelligence with other AI developers, and implemented stricter access controls, while calling for cooperation across the sector and among policymakers to address this type of practice. “Distillation attacks at this scale require a coordinated response among industry actors, cloud providers, and lawmakers,” Anthropic stated.
This case arises within a broader context of tensions in the global technological race. OpenAI and Google have issued similar warnings against attempts to extract capabilities from their models, and some analysts have linked these accusations to debates over advanced chip export controls and the governance of strategic technologies.
Meanwhile, DeepSeek, Moonshot AI, and MiniMax have not issued official public statements clarifying their positions on the matter. The episode reflects the scale and complexity of the challenges facing companies that develop advanced artificial intelligence in a globally competitive environment.
Sources
Computing UK. (2026). Chinese firms used distillation to copy Claude.
Anthropic. (2026). Detecting and Preventing Distillation Attacks.
Reuters. (2026). Chinese companies used Claude to improve own models, Anthropic says.