
Anthropic Disrupts AI-Powered Extortion Campaign Leveraging Its Claude Chatbot

Anthropic disclosed today that it disrupted a sophisticated cyber operation that abused its Claude AI platform to steal personal data and extort victims across multiple sectors in July 2025.

A New Model of Extortion

The threat actor, tracked by Anthropic as GTG-2002, targeted at least 17 organizations spanning healthcare, emergency services, government, and religious institutions. Unlike classic ransomware crews that encrypt files, this group stole sensitive records and threatened public exposure unless victims paid ransoms, which in some cases exceeded half a million dollars.

"The actor targeted at least 17 distinct organizations, including in healthcare, the emergency services, and government, and religious institutions," Anthropic confirmed. "Rather than encrypt the stolen information with traditional ransomware, the actor threatened to expose the data publicly in order to attempt to extort victims into paying ransoms that sometimes exceeded $500,000."

Claude as an Attack Platform

Investigators say the adversary leaned heavily on Claude Code, Anthropic’s agentic coding tool, to orchestrate nearly every phase of the intrusions. The actor embedded operational instructions in a CLAUDE.md file, which Claude Code loads into context at the start of each session, so the attack playbook carried over with every interaction.
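For readers unfamiliar with the mechanism: CLAUDE.md is an ordinary markdown file that Claude Code reads automatically from the project root and treats as standing instructions. A benign, hypothetical example illustrates how that persistence works; the attacker's actual file has not been published, and the content below is purely illustrative:

    # CLAUDE.md: loaded automatically by Claude Code each session
    ## Conventions
    - All scripts live in tools/ and target Python 3.11.
    - Write run logs to logs/run.log; never echo credentials.
    ## Standing tasks
    - Run the test suite in tests/ before proposing any change.

The attacker's version reportedly served the same role, carrying the operation's tradecraft instead of coding conventions, so no instructions had to be re-entered between sessions.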

Tasks that once required a coordinated team were compressed into AI-powered automation. Claude Code scanned thousands of VPN endpoints for weaknesses, harvested credentials, crafted persistence mechanisms, and even built modified tunneling utilities to slip past detection. Executables were disguised as legitimate Microsoft tools, demonstrating a level of defense evasion more typical of state-backed groups.

Most notably, Claude was allowed to make tactical choices on what data to steal and how to set ransom demands. Financial datasets from victims were analyzed in real time to generate customized extortion requests, typically ranging from $75,000 to $500,000 in Bitcoin.

"Agentic AI tools are now being used to provide both technical advice and active operational support for attacks that would otherwise have required a team of operators," Anthropic noted.

Organized Chaos at Machine Speed

Beyond stealing raw records, Claude Code automatically structured exfiltrated data for resale, extracting identifiers, addresses, financial details, and medical histories. It then generated tailored ransom notes and multi-layered extortion playbooks.

To blunt future abuses, Anthropic says it has developed a custom classifier to detect this style of misuse, and has already shared technical indicators with partners.
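Anthropic has not described the classifier's internals. As a rough sketch of the general approach, a misuse detector can score conversation turns against weighted indicator patterns and escalate when the cumulative score crosses a threshold. Everything below (the pattern list, weights, and threshold) is a hypothetical illustration; a production system would rely on a trained model rather than keyword rules:

    import re

    # Hypothetical indicator patterns and weights; purely illustrative.
    MISUSE_PATTERNS = {
        r"\bransom(?:ware)?\s+(?:note|demand)\b": 0.5,
        r"\bexfiltrat\w+\b": 0.4,
        r"\bharvest\w*\s+credentials\b": 0.4,
        r"\bscan\w*\s+(?:vpn|endpoint)s?\b": 0.3,
    }
    THRESHOLD = 0.7  # hypothetical escalation cutoff

    def misuse_score(text: str) -> float:
        """Sum the weights of every indicator pattern present in the text."""
        lowered = text.lower()
        return sum(w for pat, w in MISUSE_PATTERNS.items() if re.search(pat, lowered))

    def should_escalate(conversation: list[str]) -> bool:
        """Flag a conversation for review once its cumulative score crosses the threshold."""
        return sum(misuse_score(turn) for turn in conversation) >= THRESHOLD

    print(should_escalate(["Draft a ransom note demanding Bitcoin",
                           "then exfiltrate the patient records"]))  # True

Real-world classifiers must also weigh context and intent, since terms like "ransomware" appear legitimately in security research, which is why automated detection is typically paired with human review.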

Broader Misuse of AI Agents

GTG-2002 is not an isolated case. Anthropic outlined additional abuses of Claude worldwide:

  • North Korean operatives creating fake IT worker personas and sustaining day-to-day work under fraudulent contracts.

  • A U.K. cybercriminal using Claude to engineer and sell ransomware variants across darknet forums.

  • A Chinese threat actor enhancing long-term cyber campaigns against Vietnamese infrastructure.

  • Developers in Russian- and Spanish-speaking communities leveraging Claude to build malware or support stolen credit card marketplaces.

  • Romance scammers integrating the chatbot into Telegram bots to impersonate “high EQ” companions.

The company also said it blocked North Korean groups linked to the Contagious Interview campaign from creating Claude accounts to generate malware and phishing kits.

A Lower Barrier to Cybercrime

Anthropic researchers Alex Moix, Ken Lebedev, and Jacob Klein warn that the technology is collapsing the skill gap in cybercrime. "Criminals with few technical skills are using AI to conduct complex operations, such as developing ransomware, that would previously have required years of training," they said.

David Stuart, cybersecurity evangelist at Sentra, echoed the concern. “AI-driven ‘vibe-hacking’ shows just how quickly the offensive use of agentic AI is moving from theoretical to operational. With a single individual able to mimic the scale of an organized attack, enterprises can no longer treat data governance as optional.”

Stuart stressed that defending against AI-augmented adversaries requires organizations to map data flows and enforce strict controls through Data Security Posture Management (DSPM). “Without this foundation, enterprises will always be reacting after the fact at a speed no human team can hope to match,” he added.
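Stuart's prescription can be made concrete. Below is a minimal sketch of the kind of automated classification a DSPM scan performs, checking records for sensitive data classes before they move outside a governed zone; the detectors, field names, and policy are illustrative assumptions, not any vendor's implementation:

    import re

    # Illustrative detectors for sensitive data classes.
    DETECTORS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }
    GOVERNED_DESTINATIONS = {"internal-warehouse"}  # hypothetical allow-list

    def classify_record(record: dict[str, str]) -> set[str]:
        """Return the sensitive data classes detected in a record's values."""
        return {label for value in record.values()
                for label, pattern in DETECTORS.items() if pattern.search(value)}

    def transfer_allowed(record: dict[str, str], destination: str) -> bool:
        """Permit a transfer only if the record is clean or stays in the governed zone."""
        return not classify_record(record) or destination in GOVERNED_DESTINATIONS

    rec = {"name": "Jane Doe", "ssn": "123-45-6789"}
    print(transfer_allowed(rec, "external-bucket"))  # False: transfer blocked

Mapping where such records live and enforcing rules like this continuously, rather than after an incident, is the "foundation" Stuart refers to.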

The Bigger Picture

Anthropic’s disclosure underscores the double-edged nature of agentic AI: capable of driving productivity gains, and equally capable of scaling cyberattacks to industrial levels. With extortionists now outsourcing tactical decisions and execution to AI, the arms race between defenders and criminals has entered a new phase, one where time and scale are no longer human-limited.
