Anthropic MiniMax AI lab: engineers by holographic globe, racks of servers, analysts at work, and a secure glass vestibule.
Anthropic Says MiniMax, DeepSeek Distilled Claude

Anthropic says MiniMax, along with DeepSeek and Moonshot AI, orchestrated extensive distillation attacks on its Claude models, using more than 24,000 fake accounts to generate 16 million exchanges. The San Francisco-based AI firm detailed how the Chinese labs routed massive data-extraction traffic through third-party commercial proxy services, allegedly to refine their own models with capabilities taken from Claude. Routing requests through these proxies let the operations bypass regional restrictions and complicated detection. Distillation attacks are dangerous because they transfer advanced AI capabilities far faster and more cheaply than independent development, while sidestepping the safety safeguards built into the original model. Anthropic's findings, supported by IP analysis and request metadata, pointed to synchronized account networks used to mask the illicit activity. Concerned about potential misuse in military or surveillance applications, Anthropic has tightened its security measures and is urging collective industry and policy action against such operations. For comprehensive insights, explore the full article at the provided link.
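To clarify what "distillation" means here, the sketch below shows the core idea in miniature: a student model is trained to reproduce a teacher model's output distribution rather than learning from raw data. This is a generic, hypothetical illustration with made-up numbers, not Anthropic's detection method or any lab's actual pipeline; in the alleged attacks the "teacher outputs" would be Claude's responses harvested at scale through fake accounts.

```python
import numpy as np

def softmax(z):
    """Convert a logit vector into a probability distribution."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical "teacher" logits for a single prompt, standing in for
# a proprietary model's response distribution (values are made up).
teacher_logits = np.array([2.0, 0.5, -1.0])
T = 2.0  # distillation temperature: softens targets to expose more signal
soft_targets = softmax(teacher_logits / T)

# Tiny "student": a single logit vector trained by gradient descent
# to match the teacher's softened distribution.
student_logits = np.zeros(3)
lr = 1.0
for _ in range(500):
    p = softmax(student_logits / T)
    # Gradient of cross-entropy between soft targets and student output.
    student_logits -= lr * (p - soft_targets)

student_probs = softmax(student_logits / T)
```

After training, `student_probs` closely matches `soft_targets`: the student has copied the teacher's behavior on this input without ever seeing the teacher's weights or training data, which is why large-scale query access alone is enough to make distillation work.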

Anthropic Claude Under Large Scale Distillation Attacks By Chinese AI Labs with 13 Million Exchanges
