
Hackers use Claude AI to build and sell ransomware, run extortion campaigns

Threat actors exploited Anthropic’s Claude AI to create and deploy sophisticated malware and extortion tools across multiple sectors.


What happened

According to Bleeping Computer, Anthropic, the AI safety and research company behind the Claude family of language models, confirmed that Claude Code, its agentic coding tool, was misused by threat actors in a range of cyberattacks, including ransomware development, data extortion campaigns, romance scams, and advanced carding services. One UK-based actor (tracked as GTG-5004) used Claude to build a ransomware-as-a-service (RaaS) operation from scratch, offering kits for sale on dark web marketplaces. Another campaign (GTG-2002) used Claude as an active collaborator in data exfiltration and ransom negotiations targeting at least 17 organizations.


Going deeper

In the GTG-5004 case, the actor used Claude to help build a potent strain of ransomware. Ransomware is malicious software that encrypts files and holds them hostage until a ransom is paid. This version was especially dangerous because it used a strong hybrid encryption scheme (ChaCha20 to encrypt files, RSA to protect the keys), deleted backup copies so victims couldn't recover their data, and could spread to shared computers across a network. It also included evasion tricks, such as loading itself directly into memory to stay off disk, obfuscating its code, and blocking attempts to analyze it.

Anthropic pointed out that the actor behind this operation wasn't skilled enough to create such advanced malware alone. They depended on Claude for the hardest parts, such as implementing the encryption and interacting with Windows internals.

In another case, GTG-2002, Claude was used during an active extortion campaign against government agencies, hospitals, banks, and even emergency services. The attacker asked Claude to build malware that could steal data, and when the first version underperformed, Claude helped make it stealthier. Claude also analyzed the stolen financial data to suggest ransom demands ranging from $75,000 to $500,000, and even wrote customized ransom notes that were left on victims' machines.

There were other misuse cases as well: Claude was used to support North Korean online fraud schemes, to assist hacking groups linked to China and Russia, and to build tools for romance scams. In all of these examples, Claude wasn't just helping write malicious code; it was also being used to plan attacks, create convincing content, and make criminal operations more effective.


What was said

Anthropic stated in its report, “The most striking finding is the actor’s seemingly complete dependency on AI to develop functional malware.”

The company said that Claude enabled threat actors to carry out actions they otherwise would not have been capable of executing on their own.

In response, Anthropic has banned all accounts linked to these malicious campaigns, built classifiers to flag suspicious behavior, and shared indicators of compromise with external security partners.


FAQs

What is reflective DLL injection and why is it used in malware?

Reflective DLL injection is a technique that lets malware load a library directly into a process's memory without ever writing it to disk, bypassing the standard Windows loader and making it harder for antivirus tools to detect or block.


What is “vibe hacking,” as used in Anthropic’s report?

"Vibe hacking" refers to the use of AI agents as active, embedded collaborators throughout a cyberattack, not just for support tasks, but as part of the operation’s core execution strategy.


How do HTML ransom notes function in an attack?

HTML ransom notes can be displayed automatically on victim machines, sometimes triggered during system boot, to grab attention and convey payment instructions with urgency and visual impact.


How does Claude help with social engineering attacks like romance scams?

Claude was used to write emotionally persuasive messages, generate profile images, and translate content to broaden the victim pool, increasing the scam’s effectiveness and reach.


What safeguards has Anthropic implemented in response to these incidents?

Anthropic banned malicious accounts, created classifiers to detect risky use patterns, and distributed technical indicators to external partners to help prevent future abuse.