In a revelation that has sent shockwaves through the artificial intelligence community, Anthropic has exposed an "industrial-scale" operation by three of China's leading AI laboratories to systematically dismantle and steal the core functionality of its Claude AI model. The scheme, involving a staggering 16 million queries through 24,000 fraudulent accounts, represents the most significant case of AI intellectual property theft to date.
The three firms—DeepSeek, which recently stunned Silicon Valley with its efficiency breakthroughs, MiniMax, and Moonshot AI (creator of the popular Kimi assistant)—stand accused of orchestrating a sophisticated distributed network designed to vacuum up Claude's decision-making capabilities while actively working to erase the safety guardrails that prevent AI from being weaponized for censorship or surveillance.
The Anatomy of a Digital Heist
The technique at the center of this controversy is "model distillation"—a process Anthropic itself describes as potentially legitimate when used properly. Much like an art student copying the masters to learn technique, AI developers can use the outputs of powerful models to train smaller, more efficient ones. However, what Anthropic uncovered goes far beyond academic imitation.
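To make the concept concrete, here is a minimal sketch of what textbook distillation looks like in code: a small "student" network trained to match the output distribution of a larger, frozen "teacher." The models, data, and hyperparameters below are illustrative placeholders, not anything described in Anthropic's report.

```python
# Minimal sketch of knowledge distillation (illustrative only).
# A small "student" network learns to imitate the output distribution
# of a larger, frozen "teacher" network on the same inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10)).eval()
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution so the student sees more signal

for step in range(1000):
    x = torch.randn(32, 128)                 # stand-in for real inputs
    with torch.no_grad():
        teacher_logits = teacher(x)          # the "answers" from the bigger model
    student_logits = student(x)

    # KL divergence between softened distributions: the classic distillation loss
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

API-based distillation of the kind alleged here works with text outputs rather than raw logits, but the principle is the same: the student learns by imitating the teacher.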
Using what the company terms a "hydra cluster" architecture, the alleged perpetrators deployed a coordinated network of proxy services that masked their true identities and locations. By routing their queries through thousands of accounts that mixed malicious traffic with legitimate user requests, they attempted to hide their operation in plain sight.
DeepSeek conducted over 150,000 exchanges with Claude, but its focus was uniquely sinister. According to Anthropic's investigation, DeepSeek specifically engineered prompts to extract Claude's step-by-step reasoning processes—essentially trying to steal not just the answers, but the underlying thought patterns that make Claude one of the world's most sophisticated AI systems. Even more troubling, it allegedly generated "censorship-safe alternatives" to politically sensitive queries, attempting to train its models to identify and circumvent topics the Chinese government would rather avoid.
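In practice, "extracting step-by-step reasoning" usually means assembling a fine-tuning dataset of prompts that explicitly request chains of reasoning, paired with the teacher model's responses. The sketch below is purely illustrative; `query_model` and the sample questions are hypothetical stand-ins, not taken from Anthropic's findings.

```python
# Illustrative sketch of how reasoning-trace distillation data is typically
# assembled: prompts explicitly request step-by-step reasoning, and each
# (prompt, response) pair is logged as one supervised fine-tuning example.
import json

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to any hosted chat-model API."""
    return "Step 1: ...  Step 2: ...  Final answer: ..."

questions = [
    "A train leaves at 9:00 travelling 80 km/h. When has it covered 200 km?",
    "Why does quicksort degrade to O(n^2) on already-sorted input?",
]

with open("distillation_pairs.jsonl", "w") as f:
    for q in questions:
        prompt = f"Think step by step and explain your reasoning, then answer:\n{q}"
        response = query_model(prompt)  # teacher's reasoning plus answer
        f.write(json.dumps({"prompt": prompt, "completion": response}) + "\n")
```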
Moonshot AI cast a wider net with 3.4 million interactions. Its targets included agentic reasoning, complex computer operations, and advanced coding capabilities. Anthropic's investigators noted that Moonshot employed hundreds of fraudulent accounts across multiple access points, making detection significantly more challenging.
But the most brazen operation belonged to MiniMax. With more than 13 million exchanges—more than triple the combined total of the other two firms—MiniMax demonstrated what Anthropic describes as "breathtaking" organizational sophistication. When Anthropic released a major Claude update, MiniMax redirected nearly half of its query traffic within 24 hours to capture the new capabilities. This wasn't casual experimentation; this was an industrial process designed to clone Claude's capabilities in near real-time.
The Pentagon Paradox: Safety vs. Warfare
These accusations land at a particularly awkward moment for Anthropic, which finds itself caught between its self-proclaimed safety principles and the demands of an increasingly impatient U.S. military establishment.
Just days before the distillation report, Defense Secretary Pete Hegseth issued an ultimatum to Anthropic CEO Dario Amodei: allow unrestricted military use of Claude technology by Friday evening, or risk losing government contracts and facing intervention under the Defense Production Act. The Pentagon wants access to cutting-edge AI without what Hegseth derisively calls "woke" constraints that limit "lawful military applications."
Anthropic has drawn what it considers non-negotiable red lines: no autonomous kinetic operations where AI makes final targeting decisions without human intervention, and no mass domestic surveillance of American citizens. These principled stands, while admirable, place the company at odds with an administration that has made clear it expects full cooperation from its technology partners.
The irony is palpable. While Anthropic fights to prevent its technology from being used in American autonomous weapons systems, it warns that the distilled versions of Claude created by Chinese firms will have no such safeguards. "Foreign labs that illicitly distill American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems," Anthropic stated in its report. The implication is clear: a U.S. company's efforts to prevent AI from becoming a killing machine may be rendered moot by Chinese competitors who have no such ethical qualms.
"How Dare They Steal What Was Stolen?"
Perhaps the most stinging response to Anthropic's accusations came from an unlikely source: Elon Musk. The tech billionaire, never one to mince words, took to X to deliver a blistering counterpunch: "How dare they steal the stuff Anthropic stole from human coders?"
Musk's critique points to an uncomfortable reality in the AI industry. Anthropic itself has faced multiple lawsuits from authors and publishers alleging massive copyright infringement. The company reportedly settled a class-action lawsuit for $1.5 billion over claims it used pirated books and copyrighted materials to train its models. Internal documents from a project codenamed "Panama Project" allegedly described plans to "destructively scan all the books in the world," purchasing physical books, slicing off their spines with hydraulic cutters, and digitizing them before sending the physical copies to recycling—all while hoping "the outside world" wouldn't find out.
This "pot calling the kettle black" narrative has gained significant traction on social media. Critics argue that if training on the entirety of human written expression without permission is innovation, but training on AI outputs becomes theft, the industry has created a convenient double standard that benefits incumbent players at the expense of newcomers.
The Technical Arms Race
Behind the legal and ethical debates lies a fascinating technical cat-and-mouse game. Anthropic has now deployed sophisticated detection systems including behavioral fingerprinting and specialized classifiers designed to identify distillation patterns in API traffic. These systems look for telltale signs: unusually high query volumes, hyper-focused capability extraction, and repetitive prompt structures that don't resemble normal human interaction.
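As a rough illustration of the kind of behavioral signals such a classifier might weigh, the sketch below scores an account on raw query volume and on how templated its prompts look. It is a simplified toy, not Anthropic's actual detection system; the thresholds and helper names are assumptions.

```python
# Toy illustration of distillation-detection heuristics: flag accounts with
# abnormally high query volume or highly repetitive, machine-generated prompts.
from difflib import SequenceMatcher

def flag_suspicious(account_prompts: list[str],
                    volume_threshold: int = 10_000,
                    similarity_threshold: float = 0.8) -> dict:
    """Score one account's traffic on volume and prompt repetitiveness."""
    volume = len(account_prompts)

    # Average pairwise similarity over a small sample of prompts:
    # highly templated extraction queries tend to score much higher
    # than organic human usage.
    sample = account_prompts[:50]
    pairs = [(a, b) for i, a in enumerate(sample) for b in sample[i + 1:]]
    similarity = (
        sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)
        if pairs else 0.0
    )

    return {
        "high_volume": volume > volume_threshold,
        "templated_prompts": similarity > similarity_threshold,
        "avg_similarity": round(similarity, 3),
        "query_count": volume,
    }

# Example: a handful of near-identical extraction prompts.
prompts = [f"Explain step by step how to solve problem #{i}" for i in range(5)]
print(flag_suspicious(prompts))
```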
The company is also strengthening access controls for educational and research accounts—pathways that are "often exploited to create fraudulent accounts." And it is developing countermeasures at the product, API, and model levels to make its outputs less useful for illicit training without harming legitimate users.
But Anthropic admits it cannot fight this battle alone. The company is actively rallying industry partners including OpenAI and Google to form a united front. It is sharing technical indicators of distillation attacks and pushing for coordinated action that spans both private industry and government policy.
Policy Implications: Beyond GPU Export Controls
The distillation scandal has reignited debates about the effectiveness of current U.S. export control strategies. To date, policy has focused heavily on hardware—restricting shipments of advanced NVIDIA GPUs and other critical components needed to train frontier models from scratch.
But distillation attacks represent an end-run around these controls. If a Chinese lab can simply query American models millions of times and use those responses to train its own systems, it can achieve frontier capabilities without ever needing the banned hardware. The recent relaxation of some chip export restrictions—allowing NVIDIA H200 sales with a 25% tariff and case-by-case review—has critics warning that the U.S. is effectively "fueling China's AI ambitions."
Anthropic is now pushing for anti-distillation measures to be incorporated into export control legislation, arguing that protecting American AI intellectual property requires just as much attention as protecting semiconductor supply chains.
The Road Ahead
As of this writing, none of the accused Chinese companies—DeepSeek, MiniMax, or Moonshot AI—have responded to requests for comment. The industry now watches to see whether this will escalate into formal trade disputes, legal action, or simply become accepted as the new normal in an increasingly cutthroat global AI race.
What's clear is that the era of open trust in AI development is over. As models become more powerful and economically valuable, the incentives for industrial espionage grow proportionally. Anthropic's detailed exposé may represent the opening salvo in what becomes a sustained war over AI intellectual property—a war fought not with guns, but with queries, proxies, and terabytes of training data.
For now, Anthropic has drawn its line in the sand. Whether other American AI companies join them, and whether policymakers provide the legal and regulatory backup they're requesting, will determine whether this stands as a moment of effective industry self-defense or merely the first battle in a long and costly war.
Source: Anthropic
