Rapid Read • 7 min read

Researchers Warn of AI-Powered Ransomware 'PromptLock' Exploiting Large Language Models

WHAT'S THE STORY?

What's Happening?

Cybersecurity researchers at ESET have flagged a new AI-powered ransomware strain named 'PromptLock' that operates, in effect, as a hard-coded prompt injection attack against a large language model. The malware uses the Ollama API to interface with OpenAI's gpt-oss:20b model, directing it to carry out tasks such as inspecting the local filesystem, exfiltrating files, and encrypting data on Windows, macOS, and Linux devices. The ransomware encrypts files with the SPECK 128-bit block cipher and is believed to be a proof-of-concept, as its destructive features are not fully implemented. ESET discovered samples on VirusTotal and noted that attackers can establish a proxy or tunnel from a compromised network to a server running the AI model, removing the need to deploy the entire model inside the victim environment.
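The attack chain hinges on nothing more exotic than Ollama's standard HTTP interface. The sketch below is illustrative rather than PromptLock's actual code: it shows how any program can submit a prompt to a reachable gpt-oss:20b instance via Ollama's documented /api/generate endpoint. The host, prompt text, and model tag here are assumptions for demonstration only.

```python
# Illustrative sketch (not PromptLock's code): querying a local model
# through Ollama's HTTP API. Host, prompt, and model tag are assumed.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

payload = {
    "model": "gpt-oss:20b",
    "prompt": "Summarize in one sentence what a filesystem inventory is.",
    "stream": False,  # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req, timeout=60) as resp:
    body = json.load(resp)

# The generated text comes back in the "response" field.
print(body.get("response", ""))
```

Because the only requirement is network reachability of such an endpoint, the same request can be proxied or tunneled to a server the attacker controls, which is why the full 20-billion-parameter model never has to be present on the victim's machine.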

Why It's Important?

The emergence of 'PromptLock' underscores the vulnerabilities that come with deploying AI systems in network environments. Because AI agents are often granted broad filesystem and administrative access, a successful prompt injection attack can repurpose them for ransomware activity, effectively turning an organization's own AI tooling against it. Detection is further complicated by the fact that 'PromptLock' can produce different indicators of compromise from one run to the next, undermining signature-based defenses. Organizations must prioritize securing AI systems to prevent such attacks.
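Since the indicators vary, defenders may get more traction from behavioral signals than from static signatures. One minimal sketch of such a signal, assuming Ollama's default port of 11434 and the third-party psutil library, is to flag local processes holding connections to an Ollama endpoint on hosts where no local LLM is expected; production detection would instead rely on EDR or network telemetry.

```python
# Minimal detection sketch: list processes with TCP connections to
# Ollama's default port (11434). Port and approach are assumptions;
# this inspects only the local machine and is purely illustrative.
import psutil  # third-party: pip install psutil

OLLAMA_DEFAULT_PORT = 11434

def find_ollama_clients():
    """Return (pid, process name, remote address) tuples for connections to port 11434."""
    hits = []
    for conn in psutil.net_connections(kind="tcp"):
        if conn.raddr and conn.raddr.port == OLLAMA_DEFAULT_PORT:
            name = ""
            if conn.pid:
                try:
                    name = psutil.Process(conn.pid).name()
                except psutil.NoSuchProcess:
                    pass
            hits.append((conn.pid, name, f"{conn.raddr.ip}:{conn.raddr.port}"))
    return hits

if __name__ == "__main__":
    for pid, name, remote in find_ollama_clients():
        print(f"PID {pid} ({name}) -> {remote}")
```

Port 11434 is only Ollama's default and can be changed, so a check like this is a heuristic rather than a reliable indicator on its own.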

What's Next?

While 'PromptLock' is currently a proof-of-concept, its existence signals the need for heightened awareness and preparedness within the cybersecurity community. Researchers and organizations must collaborate to develop strategies to detect and mitigate AI-powered ransomware threats. As AI technology continues to advance, ongoing research and innovation in cybersecurity will be crucial to safeguarding against emerging threats. Organizations may need to implement stricter security measures and conduct regular assessments to ensure their AI systems are protected from exploitation.

