Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI
Anthropic reports 24,000 fraudulent accounts and more than 16 million exchanges in alleged industrial-scale training campaign.
Anthropic has formally accused three prominent Chinese AI companies—DeepSeek, MiniMax, and Moonshot—of conducting industrial-scale campaigns to illicitly 'distill' its Claude AI model. According to Anthropic's announcement, the operation involved creating approximately 24,000 fraudulent accounts and engaging in more than 16 million exchanges with Claude to extract its capabilities. The company detailed that DeepSeek alone conducted over 150,000 exchanges specifically targeting Claude's reasoning architecture while also using the model to generate 'censorship-safe alternatives to politically sensitive questions about dissidents, party leaders, or authoritarianism.'
Distillation is a legitimate AI training technique in which a smaller, more efficient model learns from a larger, more advanced one. However, Anthropic contends this campaign crossed into illicit territory because of its scale, its reliance on deceptively created accounts, and its potential bypassing of Claude's built-in safety and ethical guardrails. MiniMax and Moonshot allegedly accounted for 3.4 million and 13 million exchanges, respectively. The accusations follow similar claims from OpenAI, which last week told lawmakers that DeepSeek was engaged in 'ongoing efforts to free-ride on the capabilities developed by OpenAI and other U.S. frontier labs.'
The core concern, as articulated by Anthropic, is national security. The company warns that foreign labs distilling American models can feed 'unprotected capabilities into military, intelligence, and surveillance systems,' potentially enabling 'authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance.' In response, Anthropic is calling on the broader AI industry, cloud providers, and policymakers to address the issue, suggesting that 'restricted chip access' could limit the scale of such operations. The episode underscores growing geopolitical tensions in AI development and the difficulty of protecting intellectual property in an open-API ecosystem.
- Anthropic reports 24,000 fraudulent accounts and 16+ million exchanges in alleged model distillation campaign by Chinese firms.
- DeepSeek specifically targeted Claude's reasoning capabilities and generated censorship-safe political content, per the allegations.
- Anthropic warns distilled models lack safety guardrails and could empower foreign military and surveillance systems.
Why It Matters
The dispute elevates AI distillation to a national security issue, testing how openly accessible AI services can be protected from state-scale intellectual property transfer.