Cybercriminals are using fake AI video tools to spread a powerful new malware called Noodlophile, targeting users through deceptive ads and multi-stage attacks.
A newly discovered malware called Noodlophile is being distributed through fake AI-powered video generation tools. Cybercriminals are luring users with websites posing as advanced media generators, like one called Dream Machine, promoted via Facebook ads. Instead of receiving AI-generated videos, victims end up downloading a powerful infostealer designed to exfiltrate sensitive data from web browsers and crypto wallets.
Morphisec researchers uncovered the campaign, noting that this marks the first known appearance of the Noodlophile malware, which is being sold on the dark web as part of a new malware-as-a-service operation, primarily run by Vietnamese-speaking actors.
Victims are tricked into uploading files to a fake AI video platform and receive a ZIP file in return. Inside is a file named something like “Video Dream MachineAI.mp4.exe”, which is actually a malicious executable, not a video. Because Windows hides known file extensions by default, many users will only see the fake “.mp4” name and never realize they are launching malware.
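For defenders or cautious users, the double-extension trick is straightforward to screen for. The sketch below is illustrative only and not taken from the Morphisec report; the extension lists are assumptions to adapt. It scans a folder for files whose real (final) extension is executable but whose name embeds a media extension:

```python
import os

# Media extensions attackers commonly pair with a trailing ".exe" (illustrative list)
MEDIA_EXTENSIONS = {".mp4", ".avi", ".mov", ".mkv", ".jpg", ".png", ".pdf"}

# Extensions that mean the file will actually run as a program (illustrative list)
EXECUTABLE_EXTENSIONS = {".exe", ".scr", ".com", ".bat", ".cmd"}

def find_masquerading_files(directory: str) -> list[str]:
    """Flag files like 'Video Dream MachineAI.mp4.exe' whose final extension
    is executable but whose name embeds a media extension to fool users
    who have 'hide known extensions' enabled."""
    suspicious = []
    for entry in os.scandir(directory):
        if not entry.is_file():
            continue
        root, final_ext = os.path.splitext(entry.name.lower())
        _, inner_ext = os.path.splitext(root)
        if final_ext in EXECUTABLE_EXTENSIONS and inner_ext in MEDIA_EXTENSIONS:
            suspicious.append(entry.path)
    return suspicious

if __name__ == "__main__":
    for path in find_masquerading_files(os.path.expanduser("~/Downloads")):
        print(f"Suspicious double extension: {path}")
```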
Once launched, the malware runs through several stealthy steps. Depending on the antivirus software it detects, it uses different tactics to avoid being caught, including injecting code into legitimate Windows processes. It then steals browser credentials, session tokens, and crypto wallet data, sending everything through a Telegram bot that doubles as a command-and-control channel. Some versions also bundle XWorm, giving attackers full remote control of the infected device.
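Because the Telegram Bot API rides over ordinary HTTPS, one practical hunting lead is simply to look for traffic to api.telegram.org in proxy or DNS logs. A minimal sketch, assuming line-oriented text logs (the paths and format are placeholders to adapt to your log schema):

```python
import re
import sys

# Hostname of the Telegram Bot API, which Noodlophile-style stealers
# reportedly abuse as an exfiltration / command-and-control channel.
TELEGRAM_HOSTS = ("api.telegram.org",)

def scan_log(path: str) -> None:
    """Print log lines that reference Telegram Bot API hosts."""
    pattern = re.compile("|".join(re.escape(h) for h in TELEGRAM_HOSTS))
    with open(path, "r", errors="replace") as log:
        for lineno, line in enumerate(log, 1):
            if pattern.search(line):
                print(f"{path}:{lineno}: {line.rstrip()}")

if __name__ == "__main__":
    for log_path in sys.argv[1:]:
        scan_log(log_path)
```

Hits are not proof of compromise, since legitimate Telegram bot integrations exist, but on endpoints with no business reason to call the Bot API they deserve a closer look.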
“Noodlophile Stealer represents a new addition to the malware ecosystem,” Morphisec researchers explained. “Previously undocumented in public malware trackers or reports, this stealer combines browser credential theft, wallet exfiltration, and optional remote access deployment.”
Noodlophile is a clear example of how AI-themed platforms are being used to deliver malware at scale. Attackers set up fake tools that look like real AI products and push them through mainstream ads on platforms like Facebook. The result is a convincing setup that draws users in before deploying an infostealer.
The malware runs in memory, abuses legitimate built-in Windows tools in a living-off-the-land approach, and sends stolen data through Telegram. Together these steps reflect a structured, polished operation that is difficult to detect and remove. As more people adopt AI tools, threats like this are expected to become more common, putting more pressure on cybersecurity teams to catch them early.
To spot a fake AI tool, check for signs like poor website design, unrealistic claims, a lack of verified reviews, and suspicious download prompts, especially ZIP files disguised as videos.
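One concrete check worth automating is inspecting a downloaded archive before extracting it. This hedged example (the archive filename is hypothetical, and the extension list is an assumption) lists any executable payloads hiding inside a ZIP that was supposed to contain a video:

```python
import zipfile

# Extensions that should never appear in an archive claiming to hold a video
EXECUTABLE_EXTENSIONS = (".exe", ".scr", ".com", ".bat", ".cmd", ".js", ".vbs")

def zip_contains_executables(archive_path: str) -> list[str]:
    """Return the names of executable files inside a ZIP archive, so a
    'video' download carrying an .exe can be spotted before anything
    is extracted or double-clicked."""
    with zipfile.ZipFile(archive_path) as archive:
        return [
            name for name in archive.namelist()
            if name.lower().endswith(EXECUTABLE_EXTENSIONS)
        ]

if __name__ == "__main__":
    # Hypothetical filename for illustration
    for name in zip_contains_executables("dream_machine_video.zip"):
        print(f"Executable inside archive: {name}")
```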
If you suspect you have been infected, immediately disconnect from the internet, run a full malware scan using trusted antivirus software, and reset all browser and crypto wallet credentials.
The hype around AI tools breeds trust and curiosity, making users more likely to engage without verifying legitimacy, which creates ideal conditions for social engineering.
Telegram bots are increasingly used for data exfiltration and remote control because their traffic is encrypted, the Bot API is easy to automate, and connections to a popular legitimate service are harder to block outright.
Organizations should implement strict download policies, educate staff about file extension tricks, and deploy endpoint detection tools that monitor for in-memory execution and command-and-control activity.
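As a host-side complement to network monitoring, a lightweight sketch like the following can surface processes holding live connections to the Telegram Bot API. It assumes the third-party psutil package and sufficient privileges to enumerate connections; production EDR does far more, and this only illustrates the idea:

```python
import socket
import psutil  # third-party: pip install psutil

def telegram_api_ips() -> set[str]:
    """Resolve the current addresses of the Telegram Bot API endpoint."""
    infos = socket.getaddrinfo("api.telegram.org", 443, proto=socket.IPPROTO_TCP)
    return {info[4][0] for info in infos}

def processes_talking_to_telegram() -> list[tuple[int, str]]:
    """List (pid, process name) pairs with a live TCP connection to the
    Telegram Bot API -- unexpected on most corporate endpoints."""
    targets = telegram_api_ips()
    hits = []
    for conn in psutil.net_connections(kind="tcp"):
        if conn.raddr and conn.raddr.ip in targets and conn.pid:
            try:
                hits.append((conn.pid, psutil.Process(conn.pid).name()))
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue
    return hits

if __name__ == "__main__":
    for pid, name in processes_talking_to_telegram():
        print(f"PID {pid} ({name}) is connected to api.telegram.org")
```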