
Chinese AI Labs Burned 24,000 Fake Accounts Mining Claude's Secrets
What happens when you can't buy the hardware but desperately need the intelligence?
You steal it. At scale. With 24,000 fraudulent accounts.
Anthropic just exposed the most brazen AI intellectual property theft in history. Three Chinese labs—DeepSeek, Moonshot AI, and MiniMax—orchestrated sophisticated distillation campaigns that together generated over 16 million exchanges with Claude, effectively teaching their own models to think like Anthropic's flagship AI.
This isn't your typical API abuse. This is industrial espionage disguised as machine learning.
The Anatomy of Digital Piracy
The scale is staggering. MiniMax alone accounted for 13 million exchanges, redirecting nearly half its traffic to Claude whenever new models launched. Moonshot generated 3.4 million exchanges. Even DeepSeek, with its modest 150,000 exchanges, laser-focused on the crown jewels: reasoning chains and politically sensitive query alternatives.
Jacob Klein, Anthropic's head of threat intelligence, expressed "high confidence" in linking these campaigns to the labs, citing detection methods including IP address correlation and shared payment methods.
They weren't subtle about it either. Commercial proxy services managed up to 20,000 accounts simultaneously. Synchronized traffic patterns. Shared infrastructure. It's like watching someone rob a bank while wearing a name tag.
But here's what makes this fascinating: it worked.
The Export Control Workaround
While U.S. officials debate AI chip export controls, Chinese labs found their shortcut. Why spend billions on hardware and training when you can distill Claude's capabilities for the cost of API calls?
DeepSeek's upcoming V4 model reportedly outperforms both Claude and ChatGPT in coding tasks. Moonshot launched open-source Kimi K2.5 last month. These aren't coincidences—they're the fruits of systematic knowledge extraction.
The irony cuts deep. America restricts chip exports to slow China's AI progress, yet our own companies inadvertently become the training ground for their competitors.
Dmitri Alperovitch, CrowdStrike co-founder and chairman of Silverado Policy Accelerator, stated he's "not surprised" by these attacks.
Of course he isn't. This was inevitable.
The Distillation Dilemma
Here's where it gets murky. Distillation itself isn't illegal—it's a standard ML technique. The problem lies in:
- Scale and deception: 24,000 fake accounts cross ethical lines
- Terms of service violations: Anthropic explicitly bans commercial access from China
- Safety strip-mining: Stolen models lose built-in safeguards against harmful outputs
- Competitive advantage theft: Years of research compressed into months of API farming
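To see why distillation itself is ordinary ML rather than exotic hacking, here's a minimal sketch of the textbook technique: query a "teacher" model, collect its soft outputs, and train a "student" to match them. Everything below (the toy softmax teacher, model sizes, variable names) is illustrative and assumed for the example, not anything disclosed in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical "teacher": a fixed softmax model standing in for the API.
n_features, n_classes, n_queries = 8, 4, 512
W_teacher = rng.normal(size=(n_features, n_classes))

# Step 1: farm queries — send inputs, keep the teacher's soft outputs.
X = rng.normal(size=(n_queries, n_features))
soft_labels = softmax(X @ W_teacher)

# Step 2: train a student to minimize cross-entropy against those soft
# labels (equivalent to KL divergence from the fixed teacher outputs).
W_student = np.zeros((n_features, n_classes))
lr = 0.5
for _ in range(200):
    p = softmax(X @ W_student)
    grad = X.T @ (p - soft_labels) / n_queries  # softmax CE gradient
    W_student -= lr * grad

# Step 3: the student now tracks the teacher's decisions.
agreement = np.mean(
    softmax(X @ W_student).argmax(axis=1) == soft_labels.argmax(axis=1)
)
```

The point of the sketch: no weights change hands. The student learns purely from input/output pairs, which is why scale (millions of exchanges) matters far more than access, and why the safeguards baked into the teacher's training don't automatically transfer.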
Anthropic has no evidence of direct Chinese government coordination, but does it matter? The labs operated openly through Chinese proxy services, suggesting either incredible boldness or institutional backing.
Hot Take
This changes nothing and everything.
Nothing, because it exposes what everyone suspected: determined actors will always find ways around restrictions. Export controls on chips? Meet export controls on intelligence itself.
Everything, because it forces a reckoning. If Claude can be distilled with 16 million exchanges, what does that say about AI moats? Are we protecting models or just delaying the inevitable?
The real winner here might be open source. When proprietary models become distillation targets, why not release them openly and compete on deployment, optimization, and services instead?
Anthropic calls for "coordinated industry response," but coordination requires trust. In a world where your API customers might be your biggest competitors, that trust is expensive.
The cat's out of the bag. The only question now is whether American AI companies will adapt or keep building higher walls around gardens that are already being harvested.
