2 min read
Researchers discover earliest GPT-4-enabled malware tool
Farah Amod
Oct 6, 2025 10:37:38 AM

Security experts have uncovered a new class of malware powered by GPT-4, raising concerns over how AI is accelerating cybercriminal capabilities.
What happened
Cybersecurity researchers from SentinelOne have identified MalTerminal, a malware sample that uses OpenAI's GPT-4 to generate either ransomware or a reverse shell at runtime. The malware, described as the earliest known example of its kind, represents a new frontier in LLM-enabled cyber threats.
Unveiled at LABScon 2025, the discovery suggests that large language models (LLMs) are now being embedded directly into malware tools, rather than simply used for external support. Although there is no current evidence that MalTerminal was deployed in real-world attacks, its existence shows how quickly malware is evolving and points to the detection challenges this approach could create.
Going deeper
The malware relies on GPT-4 via a now-deprecated OpenAI API endpoint, indicating it was developed before November 2023. The package includes a Windows executable and Python scripts that prompt the attacker to choose between generating ransomware or a reverse shell. It also includes FalconShield, a defensive tool that analyzes scripts and uses GPT to write basic malware analysis reports.
Researchers say MalTerminal belongs to a new class of "LLM-embedded malware" alongside other examples like LAMEHUG and PromptLock. These tools not only generate malicious logic on demand but can also adapt their behavior dynamically, making detection significantly harder.
The researchers warn that LLM-enhanced malware introduces new technical and strategic challenges for defenders, particularly because it blurs the line between tool and operator.
What was said
“MalTerminal contained an OpenAI chat completions API endpoint that was deprecated in early November 2023, suggesting that the sample was written before that date,” said researchers Alex Delamotte, Vitaly Kamluk, and Gabriel Bernadett-Shapiro. They concluded it is likely the earliest sample of LLM-integrated malware discovered so far.
SentinelOne called the trend a “qualitative shift in adversary tradecraft,” warning that LLM-driven runtime logic generation is becoming a powerful tool for attackers.
FAQs
What makes LLM-enabled malware different from traditional malware?
LLM-enabled malware can dynamically generate its own code and behavior in real time, making it harder to detect with traditional signature-based methods. It may also respond to inputs or adapt based on the environment it's deployed in.
Is MalTerminal a real threat to users right now?
There is no evidence that MalTerminal has been used in the wild. It may be a proof-of-concept or internal tool used by malicious actors, but its discovery confirms that this kind of malware is technically possible.
How do hidden prompts in phishing emails fool AI scanners?
Attackers embed invisible instructions in an email's HTML that target AI-based filters, telling them to misclassify harmful content as benign. This technique is known as prompt injection.
What is “LLM poisoning” and how is it used?
LLM poisoning involves manipulating the data a model is trained on, or the prompts it is fed, in order to trick it into making incorrect or unsafe decisions, such as misclassifying malware as safe content.
What steps can organizations take to defend against AI-enabled threats?
Defenders should invest in behavior-based detection, monitor LLM usage within networks, educate teams about prompt injection risks, and scrutinize third-party tools that integrate AI functionality.
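As a concrete illustration of the "monitor LLM usage within networks" point, the sketch below flags outbound connections to well-known hosted LLM API endpoints from processes that aren't expected to use them. This is not SentinelOne's tooling; the log format, column names, and allowlists are hypothetical placeholders, and the snippet is only a minimal example of the kind of behavior-based check defenders can build on top of their own proxy or EDR telemetry.

```python
# Minimal sketch: flag outbound connections to well-known LLM API hosts that
# do not come from an approved set of applications. The CSV format, field
# names, and allowlists below are hypothetical; adapt them to whatever
# proxy or EDR telemetry your environment actually produces.

import csv
from pathlib import Path

# Hostnames of popular hosted LLM APIs (illustrative, not exhaustive).
LLM_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

# Processes expected to talk to LLM APIs in this hypothetical environment.
APPROVED_PROCESSES = {"chrome.exe", "approved-ai-assistant.exe"}


def flag_unexpected_llm_traffic(proxy_log: Path) -> list[dict]:
    """Return log rows where an unapproved process contacted an LLM API host."""
    findings = []
    with proxy_log.open(newline="") as handle:
        # Expects columns: timestamp, host, process, dest_host
        for row in csv.DictReader(handle):
            if row["dest_host"] in LLM_API_HOSTS and row["process"] not in APPROVED_PROCESSES:
                findings.append(row)
    return findings


if __name__ == "__main__":
    for hit in flag_unexpected_llm_traffic(Path("proxy_events.csv")):
        print(f"[!] {hit['timestamp']} {hit['host']}: {hit['process']} -> {hit['dest_host']}")
```

The same idea extends to DNS logs or egress firewall data: rather than trying to signature the malware itself, defenders watch for the LLM API calls that LLM-embedded tools like MalTerminal cannot operate without.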