3 min read

Microsoft Copilot flaw exposed data via zero-click exploit

Microsoft Copilot flaw exposed data via zero-click exploit

AI security firm Aim Security has disclosed a critical, zero-click vulnerability in Microsoft 365 Copilot, the AI assistant integrated into Microsoft’s enterprise software suite. The flaw, dubbed "EchoLeak," could have allowed attackers to steal sensitive corporate data, including protected health information (PHI), simply by sending a specially crafted email, requiring no interaction from the user. Microsoft has since patched the vulnerability.

 

What happened

In January 2025, researchers at Aim Security discovered the novel attack chain and responsibly disclosed it to the Microsoft Security Response Center (MSRC). According to reports, Microsoft initially attempted a fix in April but, after discovering additional issues, released a complete, server-side patch in May 2025. The coordinated public disclosure occurred on June 11-12, 2025. Microsoft has stated that it is not aware of any customers being affected or any malicious exploitation of the flaw in the wild.

 

By the numbers

The vulnerability is tracked under the identifier CVE-2025-32711 and has been assigned a CVSS score of 9.3. The flaw affects Microsoft 365 Copilot, an AI tool used across widely adopted applications like Outlook, Teams, Word, and SharePoint, platforms commonly used to handle sensitive patient and organizational data.

 

Going deeper

The EchoLeak attack chain bypassed several of Microsoft’s security guardrails. An attacker sends an email to a target. The email contains hidden instructions (an indirect prompt injection) cleverly phrased to bypass Microsoft's Cross-Prompt Injection Attack (XPIA) filters.

The user does not need to open or click anything. Later, when the user asks Copilot a question, its Retrieval-Augmented Generation (RAG) engine processes the malicious email as relevant context.

The hidden instructions command Copilot to find and exfiltrate sensitive data from the user's environment (e.g., from other emails, documents, or chats).

The data is leaked by embedding it within a markdown image link that abuses a trusted Microsoft Teams URL, bypassing Microsoft’s Content Security Policy (CSP). The browser automatically attempts to fetch the "image," silently sending the sensitive data to an attacker-controlled server.

 

The intrigue

EchoLeak represents the first documented zero-click vulnerability targeting a major generative AI assistant. Aim Security has termed the core issue an "LLM Scope Violation," a new class of exploit where an untrusted external input (the email) tricks the AI model into acting on privileged internal data. This violates the principle of least privilege, turning the AI's own logic against itself. The five-month period from discovery to a complete fix also shows the complexity and novelty of securing AI systems.

 

Why it matters

For healthcare organizations and their business associates, this vulnerability posed a significant potential risk. While no patient data was reportedly compromised, the flaw in a core enterprise tool like Copilot created a pathway for exfiltrating any data the AI could access. This includes sensitive PHI and personally identifiable information (PII) stored in emails, OneDrive files, SharePoint sites, and Teams chats. The incident proves the need for covered entities to include AI-specific threats in their HIPAA Security Rule risk analysis.

 

What they're saying

Microsoft stated, "We appreciate Aim for identifying and responsibly reporting this issue so it could be addressed before our customers were impacted. We have already updated our products to mitigate this issue, and no customer action is required."

Adir Gruss, CTO of Aim Security, stressed the severity, telling Fortune, "This vulnerability represents a significant breakthrough... it demonstrates how attackers can automatically exfiltrate the most sensitive information from Microsoft 365 Copilot’s context without requiring any user interaction whatsoever."

Ensar Seker, CISO at SOCRadar, noted the broader implications, "This signals a broader architectural flaw across the AI assistant space – one that demands runtime guardrails, stricter input scoping, and inflexible separation between trusted and untrusted content."

 

FAQs

What is a zero-click vulnerability?

A zero-click vulnerability is a security flaw that can be exploited by an attacker without any interaction from the victim. The user does not need to click a malicious link, open a file, or download an attachment for the attack to be successful.

 

What is an "LLM Scope Violation"?

Coined by Aim Security, this term describes a new type of AI vulnerability where untrusted external input (like an email from an outside sender) manipulates a Large Language Model (LLM) into accessing and acting upon privileged, internal data, thereby violating its intended operational scope.

 

What is Retrieval-Augmented Generation (RAG)?

RAG is a technique used by AI systems like Copilot to improve the quality of their responses. It works by retrieving relevant, up-to-date information from a specific set of documents or data sources (like a user's emails and files) and using that information to generate a more accurate and context-aware answer.