Widespread Capability Extraction
Anthropic has publicly accused three AI research organizations – DeepSeek, Moonshot AI, and MiniMax – of orchestrating massive, systematic campaigns to extract proprietary capabilities from its advanced Claude AI models. The company characterizes the operations as industrial-scale efforts to unlawfully obtain Claude's capabilities, and asserts that the alleged theft poses a significant and escalating danger to AI safety, with broader implications for national security. According to Anthropic, the three entities collectively engaged in over 16 million interactions with Claude through approximately 24,000 fabricated user accounts, in violation of its service agreements and in circumvention of its regional access restrictions.
The stated objective of these campaigns, per Anthropic, was 'distillation,' an established AI training technique in which a smaller or less complex model learns to replicate the outputs of a more powerful system. While distillation itself is a legitimate and common practice for building more efficient models, Anthropic contends that its misuse in this context enabled the rapid, low-cost acquisition of cutting-edge AI capabilities without the substantial investment in independent research and development those capabilities normally require.
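Anthropic's report does not describe the labs' training code, but the underlying technique is well documented in the literature. Below is a minimal sketch of classical knowledge distillation in PyTorch; the function name, temperature, and toy tensors are illustrative only.

```python
# A minimal sketch of classical knowledge distillation (Hinton et al., 2015),
# not any lab's actual pipeline: a small "student" model is trained to match
# the temperature-softened output distribution of a larger "teacher" model.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student output distributions."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature**2

# Toy usage with random logits standing in for real model outputs.
student_logits = torch.randn(4, 32000)  # batch of 4, 32k-token vocabulary
teacher_logits = torch.randn(4, 32000)
loss = distillation_loss(student_logits, teacher_logits)
```

When only a hosted API's text output is available, as in the alleged campaigns, there are no teacher logits to match; the student is instead fine-tuned with an ordinary language-modeling loss on pairs of prompts and teacher responses, which is why large volumes of API exchanges are valuable to a would-be distiller.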
The Perils of Distillation
The technique of distillation, while a cornerstone of legitimate AI development for building smaller and more accessible models, carries significant risks when employed illicitly. Anthropic argues that competitors can use it to replicate advanced AI capabilities in a fraction of the time and at a fraction of the cost of independent innovation. A central concern is the integrity of built-in safety mechanisms: illicitly distilled models may not inherit the safeguards designed to prevent malicious use, such as those protecting against the creation or dissemination of bioweapons or the facilitation of cyberattacks. Integrating such unsafeguarded capabilities into military, intelligence, or surveillance systems could greatly magnify existing risks, a danger amplified further if the models were released as open source, potentially enabling widespread misuse.
Export Controls and Security
Anthropic's allegations extend to the impact of this alleged capability extraction on international export controls and national security. The company argues that large-scale distillation undermines the intended effect of United States export regulations: when foreign AI labs acquire advanced capabilities through unauthorized extraction rather than through their own independent research and development, the technological gap with leading American AI firms narrows artificially. Anthropic further cautions that this can create a deceptive impression that export controls are failing, when in reality some of these perceived advances may rely heavily on capabilities siphoned from U.S.-based frontier models. Moreover, distillation at this scale typically requires access to advanced AI processing hardware, implying that these illicit operations depend not only on extracted intellectual property but also on access to critical technological infrastructure.
Investigative Findings
According to Anthropic's investigation, the three campaigns attributed to DeepSeek, Moonshot AI, and MiniMax followed a remarkably similar operational blueprint: extensive use of proxy services to mask originating IP addresses, creation of numerous fraudulent accounts to obscure activity, and coordinated prompting strategies, all allegedly employed to evade detection while systematically extracting high-value AI functionality. The extracted capabilities reportedly include complex agentic reasoning, the use of external tools, and advanced coding proficiency. Anthropic states that it attributed the campaigns with high confidence by correlating IP addresses, analyzing request metadata, identifying shared infrastructure indicators, and, in some instances, corroborating its findings with information shared by industry partners.
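Anthropic has not published its attribution tooling, so the following is only a toy illustration of the correlation idea it describes: grouping accounts into clusters whenever they share an observed infrastructure indicator. The field names and the union-find grouping are assumptions made for the sketch, not Anthropic's method.

```python
# Toy illustration of indicator correlation, NOT Anthropic's actual method:
# accounts that share an observed infrastructure indicator (IP address, ASN,
# user-agent string) are merged into the same cluster. Fields are hypothetical.
from collections import defaultdict

def cluster_accounts(events):
    """events: dicts like {"account": "u1", "ip": "1.2.3.4", "asn": "AS123"}."""
    # Index accounts by each (indicator type, value) pair they appeared with.
    by_indicator = defaultdict(set)
    for e in events:
        for key in ("ip", "asn", "user_agent"):
            if e.get(key):
                by_indicator[(key, e[key])].add(e["account"])

    # Union-find: merge any accounts that share at least one indicator.
    parent = {}
    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for accounts in by_indicator.values():
        first = next(iter(accounts))
        for other in accounts:
            parent[find(other)] = find(first)

    # Collect final clusters of linked accounts.
    clusters = defaultdict(set)
    for e in events:
        clusters[find(e["account"])].add(e["account"])
    return list(clusters.values())
```

In practice an investigation would weight indicators by how discriminating they are (a shared residential IP says more than a shared common user-agent), but the basic transitive-linking step looks broadly like this.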
Targeted Allegations
Digging deeper into the specific accusations, Anthropic detailed the alleged activities of each laboratory. DeepSeek's campaign reportedly involved over 150,000 exchanges focused on extracting reasoning behaviors and on generating responses that bypassed political sensitivities or censorship. Moonshot AI, according to Anthropic, ran a far larger operation: more than 3.4 million exchanges targeting coding, agentic reasoning, tool usage, and computer vision, including the development of agents capable of interacting with computer systems. MiniMax is linked to the most extensive alleged activity, comprising over 13 million exchanges focused on agentic coding and tool orchestration. Anthropic also noted that MiniMax was observed rapidly rerouting traffic to a newly released Claude model during an ongoing campaign, indicating an agile response to new releases.
Anthropic's Response Strategy
In light of these findings, Anthropic is implementing a multi-faceted strategy to harden its defenses against future unauthorized capability extraction. The company is enhancing its detection systems, refining behavioral fingerprinting techniques, and intensifying its monitoring of coordinated account activity, and it is strengthening verification processes for account pathways it identifies as frequently exploited. Recognizing the systemic nature of the threat, Anthropic says it will share technical indicators with other AI firms, cloud service providers, and relevant governmental authorities, while developing countermeasures at the product, API, and model levels. The company emphasizes that no single entity can resolve this challenge alone and calls for a unified, collaborative effort by industry and policymakers.
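Anthropic does not explain what its behavioral fingerprinting entails. As a purely hypothetical illustration of the general idea, the sketch below hashes normalized prompt templates and flags fingerprints that recur across unusually many accounts; the normalization rules and the threshold are invented for the example.

```python
# Hypothetical sketch of "behavioral fingerprinting" for coordinated prompting,
# an assumption about the general idea rather than Anthropic's technique:
# near-identical templated prompts collapse to one fingerprint, so a template
# reused across many accounts stands out.
import hashlib
import re
from collections import Counter

def prompt_fingerprint(prompt: str) -> str:
    # Normalize case, numbers, and whitespace so templated variants match.
    normalized = re.sub(r"\d+", "<num>", prompt.lower())
    normalized = re.sub(r"\s+", " ", normalized).strip()
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

def flag_coordinated(requests, min_accounts=50):
    """requests: (account, prompt) pairs; flag fingerprints spanning many accounts."""
    accounts_per_fp = Counter()
    seen = set()
    for account, prompt in requests:
        fp = prompt_fingerprint(prompt)
        if (account, fp) not in seen:
            seen.add((account, fp))
            accounts_per_fp[fp] += 1
    return {fp for fp, n in accounts_per_fp.items() if n >= min_accounts}
```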














