

Anthropic has issued a serious warning that its AI model Claude was targeted in what it describes as industrial-scale distillation attacks. In a blog post, the company revealed that three AI laboratories allegedly orchestrated a coordinated effort to extract Claude’s capabilities, generating more than 16 million exchanges through over 24,000 fraudulent accounts.
The labs involved were DeepSeek, Moonshot AI, and MiniMax. According to Anthropic, the activity violated its terms of service and regional access restrictions, and reflected deliberate, large-scale capability extraction rather than legitimate usage.
What Is Distillation and Why It Matters
Distillation is a common technique in artificial intelligence. It allows a smaller or less powerful model to learn from the outputs of a more advanced model. Many AI companies use this method internally to build lighter and more affordable versions of their own systems.
However, Anthropic claims that in this case, distillation was used to copy Claude’s strengths without developing those capabilities independently. The focus was reportedly on agentic reasoning, coding, tool usage, and structured problem solving. By repeatedly prompting Claude in specific formats, the labs could gather high quality responses that would help train their own models faster and at lower cost.
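To make the underlying technique concrete: in its classic logit-based form, distillation trains a student model to match a teacher's softened output distribution, typically by minimizing the KL divergence between the two at an elevated temperature. The sketch below is a minimal NumPy illustration of that objective; it is not tied to any lab's actual pipeline (the campaigns described here reportedly trained on Claude's text outputs via the API, a related but distinct approach), and all names and values are illustrative.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T yields softer distributions,
    # exposing more of the teacher's relative preferences between classes.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence KL(teacher || student) over the softened distributions,
    # the core training signal in classic knowledge distillation.
    p = softmax(teacher_logits, temperature)   # soft targets from the teacher
    q = softmax(student_logits, temperature)   # student's current predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# Toy check: a student whose logits track the teacher's incurs a lower
# loss than one that ranks the classes differently.
teacher = np.array([4.0, 1.0, 0.5])
close_student = np.array([3.8, 1.1, 0.4])
far_student = np.array([0.5, 4.0, 1.0])
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

Repeated over millions of prompts, this kind of objective lets a smaller model absorb a larger model's behavior far more cheaply than training from scratch, which is why the technique is legitimate internally but contentious when applied to a competitor's model.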
Scale and Sophistication of the Campaigns
The three campaigns varied in size and method. DeepSeek’s operation involved more than 150,000 exchanges and focused heavily on extracting reasoning patterns. Some prompts reportedly asked Claude to explain its internal reasoning step by step, helping generate valuable training data.
Moonshot AI’s campaign was much larger, with over 3.4 million exchanges. It targeted areas such as coding, computer-use agents, and data analysis. MiniMax’s operation was the largest, exceeding 13 million exchanges. Anthropic noted that MiniMax shifted its traffic to newer Claude versions within 24 hours of updates, indicating close monitoring and rapid adaptation.
Anthropic said it attributed the campaigns using IP address correlation, metadata analysis, infrastructure indicators, and in some cases confirmation from industry partners.
National Security and Export Control Concerns
Beyond commercial competition, Anthropic raised national security concerns. The company emphasized that frontier AI models include safeguards designed to prevent misuse in areas such as cyber attacks, disinformation, and biological threats. Illicitly distilled models may not retain these protections.
The company also linked the issue to export controls. Anthropic has supported restrictions aimed at maintaining the United States’ technological advantage. According to the company, distillation attacks undermine these efforts by allowing foreign labs to narrow the gap through extraction rather than independent innovation.
Anthropic does not offer commercial access to Claude in China. The company claims the labs bypassed these restrictions through proxy services running large networks of fraudulent accounts. In one case, a single proxy network allegedly managed more than 20,000 accounts at the same time, blending distillation traffic with normal user activity to avoid detection.
How Anthropic Is Responding
Anthropic has strengthened its defenses by deploying advanced detection systems, behavioral fingerprinting tools, and classifiers designed to identify coordinated activity. The company is also sharing intelligence with other AI labs, cloud providers, and authorities to build a broader response framework.
In a post on X, Anthropic reiterated that the attacks involved industrial-scale efforts to extract Claude’s capabilities. The company stressed that addressing such threats will require coordinated action across the AI industry, policymakers, and global stakeholders.
The episode highlights a growing challenge in the AI race. As models become more powerful and valuable, the incentive to replicate them increases. Anthropic’s disclosure signals that the battle for AI leadership is no longer only about building smarter systems, but also about protecting them.