Anthropic accuses DeepSeek, Moonshot AI (Kimi), and MiniMax of creating more than 24,000 fraudulent Claude accounts and distilling training data from 16 million conversations.
Anthropic has filed a formal complaint alleging a coordinated data extraction campaign by three prominent Chinese AI companies. According to a report by The Wall Street Journal, Anthropic accuses DeepSeek, Moonshot AI (creator of the Kimi chatbot), and MiniMax of creating more than 24,000 fraudulent user accounts to access its Claude AI assistant. Through these accounts, the companies allegedly orchestrated the distillation of training data from an estimated 16 million user exchanges with Claude.
The technical method described involves 'model distillation,' a process where a smaller, student model learns to mimic the outputs of a larger, teacher model (like Claude) by analyzing vast amounts of its responses. By automating queries through thousands of fake accounts, the accused firms could systematically harvest Claude's reasoning patterns, knowledge, and stylistic outputs. This data is invaluable for rapidly improving competing models without the immense computational cost of training from scratch.
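To make the concept concrete, here is a minimal, purely illustrative sketch of the distillation objective described above. It is not any company's actual pipeline; the vocabulary size, logits, and temperature are all hypothetical. A "student" model is trained to match the "teacher's" softened output distribution, typically by minimizing the KL divergence between the two:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # relative preferences among tokens ("dark knowledge").
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): how far the student's distribution q is from the
    # teacher's distribution p; this serves as the distillation loss.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token logits over a 4-token vocabulary.
teacher_logits = [4.0, 2.0, 1.0, 0.5]
student_logits = [3.0, 2.5, 0.5, 1.0]

T = 2.0  # assumed distillation temperature
teacher_probs = softmax(teacher_logits, T)
student_probs = softmax(student_logits, T)

loss = kl_divergence(teacher_probs, student_probs)
# A real training loop would adjust the student's weights to drive
# this loss toward zero across millions of harvested responses.
```

At scale, the teacher's probabilities are approximated from sampled API responses rather than raw logits, which is why harvesting millions of conversations is valuable: each response is an implicit training label.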
This accusation arrives amid intense global competition in foundation model development and rising concerns over AI espionage and intellectual property protection. For AI labs, their proprietary training data and the refined outputs of their models represent a core competitive advantage. The scale of the alleged operation—tens of thousands of accounts and millions of conversations—suggests a significant, organized effort to acquire this advantage. It raises immediate questions about the security measures surrounding API access and user verification for major AI services.
The implications are substantial for the AI industry's legal and competitive landscape. If proven, this could lead to significant legal battles over IP theft and terms-of-service violations, potentially resulting in injunctions or damages. It also forces all AI companies to re-evaluate their guardrails against automated data scraping, possibly leading to stricter access controls that could impact legitimate developers and researchers. Furthermore, this incident underscores the geopolitical dimensions of the AI race, where national interests and corporate competition are increasingly intertwined.
- Anthropic alleges DeepSeek, Moonshot AI (Kimi), and MiniMax created over 24,000 fake Claude accounts for data extraction.
- The companies are accused of distilling training information from approximately 16 million Claude user exchanges.
- The method likely involved automated querying for model distillation, replicating Claude's capabilities at a fraction of the cost of training from scratch.
Why It Matters
Sets a precedent for AI IP theft cases and could lead to stricter, more restrictive API access for all developers globally.