AI-Powered Ransomware "PromptLock" Marks a Dangerous New Chapter in Cybercrime
- Cyber Jill

- Aug 28
- 3 min read
ESET researchers have uncovered what may be the first ransomware family to weaponize an open-weight large language model in real time, raising the stakes in the cat-and-mouse game between attackers and defenders.
The malware, dubbed PromptLock, is written in Golang and calls OpenAI’s recently released gpt-oss:20b model via the Ollama API. Instead of shipping with static code, PromptLock generates Lua scripts on the fly to scan files, exfiltrate sensitive data, and encrypt targets across Windows, Linux, and macOS.
“PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption,” ESET explained. The dynamic script generation means that no two infections look quite the same—an innovation that complicates traditional detection approaches.
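To make the mechanism concrete, here is a minimal Go sketch of the pattern ESET describes: a binary posts a hard-coded prompt to a local Ollama instance over its REST API and receives generated source text in return. The endpoint, prompt, and struct names are illustrative assumptions rather than values recovered from PromptLock, and the prompt here asks only for a harmless directory listing.

```go
// Hypothetical sketch of the prompt-to-script loop ESET describes.
// The prompt and endpoint below are placeholders, not PromptLock's values.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Request/response shapes for Ollama's non-streaming /api/generate endpoint.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	// Ollama's default local REST endpoint.
	const endpoint = "http://127.0.0.1:11434/api/generate"

	req := generateRequest{
		Model:  "gpt-oss:20b",
		Prompt: "Write a Lua script that lists the files in the current directory.", // benign placeholder
		Stream: false,
	}
	body, _ := json.Marshal(req)

	resp, err := http.Post(endpoint, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}

	// The returned text is Lua source; the malware would hand this to an
	// embedded interpreter, but here we simply print it.
	fmt.Println(out.Response)
}
```

Because a model's output varies from run to run, even this toy client would receive slightly different Lua each time, which is precisely the property that frustrates signature-based detection.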
A Proof-of-Concept With a Dark Trajectory
At present, ESET characterizes PromptLock as more of a proof-of-concept than a fully industrialized campaign. The ransomware uses the lightweight SPECK 128-bit encryption algorithm, and its code hints at data-wiping and destruction features that are not yet functional. Still, the artifacts uploaded to VirusTotal from the U.S. on August 25, 2025, suggest active testing in the wild.
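For background on the primitive: SPECK is a lightweight block cipher published by the NSA in 2013 and designed to be fast in software. The Go sketch below implements the SPECK-128/128 variant from the cipher's public specification and checks it against the published test vector. ESET's report does not say how PromptLock handles keys or modes of operation, so treat this as background on the algorithm, not a reconstruction of the malware's encryption routine.

```go
// SPECK-128/128 block encryption, per the cipher's public specification.
// Rotation constants are alpha=8, beta=3 for the 64-bit word size.
package main

import (
	"fmt"
	"math/bits"
)

const rounds = 32 // round count for the 128-bit-key variant

// expandKey derives the 32 round keys from a 128-bit key (two 64-bit words).
func expandKey(k0, l0 uint64) [rounds]uint64 {
	var rk [rounds]uint64
	rk[0] = k0
	k, l := k0, l0
	for i := uint64(0); i < rounds-1; i++ {
		l = (k + bits.RotateLeft64(l, -8)) ^ i // negative count rotates right
		k = bits.RotateLeft64(k, 3) ^ l
		rk[i+1] = k
	}
	return rk
}

// encryptBlock applies the SPECK round function to one 128-bit block (x, y).
func encryptBlock(x, y uint64, rk [rounds]uint64) (uint64, uint64) {
	for i := 0; i < rounds; i++ {
		x = (bits.RotateLeft64(x, -8) + y) ^ rk[i]
		y = bits.RotateLeft64(y, 3) ^ x
	}
	return x, y
}

func main() {
	// Published SPECK-128/128 test vector.
	rk := expandKey(0x0706050403020100, 0x0f0e0d0c0b0a0908)
	x, y := encryptBlock(0x6c61766975716520, 0x7469206564616d20, rk)
	fmt.Printf("%016x %016x\n", x, y) // expected: a65d985179783265 7860fedf5c570d18
}
```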
Adding to the stealth, PromptLock doesn’t require the entire model—measured in gigabytes—to be deployed locally. Instead, attackers can proxy traffic to a server running Ollama with the LLM already loaded, giving them flexible and scalable control over infections.
“PromptLock does not download the entire model… the attacker can simply establish a proxy or tunnel from the compromised network,” ESET noted.
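The operational point is that an Ollama endpoint is nothing more than an HTTP address, so the same client logic works whether the model runs on localhost or behind a tunnel into attacker-controlled infrastructure. The sketch below makes the endpoint configurable and probes it for liveness; the environment-variable name is an assumption for illustration.

```go
// Illustrative sketch: the 20B-parameter model never needs to be on disk,
// only reachable over HTTP, whether locally or through a tunnel.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	// MODEL_ENDPOINT is a hypothetical variable name for this sketch; locally
	// it would be http://127.0.0.1:11434, through a tunnel it is whatever
	// address the proxy forwards to the attacker's Ollama server.
	endpoint := os.Getenv("MODEL_ENDPOINT")
	if endpoint == "" {
		endpoint = "http://127.0.0.1:11434"
	}

	// A cheap liveness probe: Ollama answers a plain GET on its root path.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(endpoint + "/")
	if err != nil {
		fmt.Println("model endpoint unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("model endpoint reachable, status:", resp.Status)
}
```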
Why This Matters: Ransomware Meets AI Unpredictability
By embedding AI into the execution chain, PromptLock illustrates a fundamental shift: ransomware can now evolve on the endpoint itself. Indicators of compromise (IoCs) are no longer predictable hash values or static code fragments—they morph with every run.
That variability echoes a wider challenge plaguing the AI industry. Despite layers of safeguards, large language models remain vulnerable to jailbreaks and prompt injections. Anthropic disclosed today that it banned accounts linked to threat actors who used its Claude model to prototype multiple ransomware variants, while new research into attacks like PROMISQROUTE demonstrates how adversaries can exploit cost-saving model-routing mechanisms to bypass safety filters.
“Prompt injection attacks can cause AIs to delete files, steal data, or make financial transactions,” Anthropic warned.
Expert Take: Prepare Without Panic
Security leaders caution against sensationalism but stress the need for AI-aware defenses.
“The rise of AI-powered ransomware is not a reason to panic or rip out defenses. It is a reminder that the fundamentals of security still matter, though they now need AI-aware adjustments,” said Dirk Schrader, VP of Security Research at Netwrix.
He emphasized that organizations practicing identity-first controls, least privilege, segmentation, behavioral detection, and strong backup hygiene are already ahead. The difference now is the need to detect behaviors that AI can uniquely produce—such as processes dynamically generating code or unusual tunneling to external AI endpoints.
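What might AI-aware detection look like in practice? One simple behavioral signal is outbound traffic to Ollama's default port, 11434, from hosts that have no business talking to a model server. The Go sketch below scans /proc/net/tcp on Linux for established connections to that port. A production tool would attribute each socket to its owning process and check it against inventory; this sketch only demonstrates the raw signal.

```go
// Rough illustration of one behavioral check: flag established TCP
// connections to Ollama's default port by reading /proc/net/tcp on Linux.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

const ollamaPort = 11434 // Ollama's default listening port

func main() {
	f, err := os.Open("/proc/net/tcp")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	scanner.Scan() // skip the header row
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) < 4 || fields[3] != "01" { // state "01" = ESTABLISHED
			continue
		}
		// rem_address is "HEXIP:HEXPORT"; decode the remote port.
		parts := strings.Split(fields[2], ":")
		if len(parts) != 2 {
			continue
		}
		port, err := strconv.ParseUint(parts[1], 16, 16)
		if err != nil {
			continue
		}
		if port == ollamaPort {
			fmt.Println("established connection to an Ollama-style endpoint:", fields[2])
		}
	}
}
```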
“Preparation for such threats is not only technical but also cultural,” Schrader added. Developers, operators, and analysts must be trained on the safe use of AI, and organizations should rehearse incident scenarios that assume attackers will have AI in their toolkit. “The bottom line is neither panic nor complacency. AI will make ransomware evolve faster and harder to detect. Organizations that focus on identity, data minimization, and behavior-based detection, and that treat AI services as assets to control, will be in a stronger position to keep pace with this new wave.”