The Calendar Invite That Hacked Google’s Gemini AI
Imagine this: You wake up, check your calendar, ask Gemini to give you a quick summary of your day… and suddenly your smart home’s windows open, your boiler turns on, and a Zoom call starts without you touching a thing.
Sounds like a sci-fi glitch? It’s not. It’s prompt injection in the wild — and it’s a glimpse into AI’s next big security challenge.
The Discovery
A research team from Tel Aviv University, Technion, and SafeBreach Labs found a vulnerability in Google’s Gemini AI that lets attackers hijack it with nothing more than a Google Calendar invite or an email subject line.
They called the attack “Invitation Is All You Need.”
How the Attack Works — Step by Step
1️⃣ Plant the Payload
The attacker sends a calendar invite with a hidden malicious instruction in the event title or description.
On the surface, it might say:
2️⃣ Wait for You to Use Gemini
When you ask Gemini, “Summarize my calendar for today,” Gemini reads the invite text as part of its context.
3️⃣ The Trick
Gemini can’t tell the difference between “content” and “instructions” — so it thinks the hidden code is something you want it to do.
4️⃣ The Damage
The AI can then:
Control smart home devices (lights, windows, boiler)
Leak your email or calendar data
Start Zoom calls
Launch apps
Stream video
Why This Is a Serious Threat
This is indirect prompt injection — where malicious instructions are planted in data your AI reads, not typed directly into it.
Here’s why that’s scary:
AI as a Blind Executor → Large Language Models (LLMs) don’t inherently know which text is “just information” vs. “an instruction to execute.”
Attack Surface Is Everywhere → Any data source Gemini can read (calendar, emails, docs, web pages) can potentially contain hidden commands.
Crossing Into the Physical World → This isn’t just stealing data — it’s controlling IoT devices, which could mean real-world consequences.
Google’s Fix
When the researchers reported it in February 2025, Google acted fast:
Detection Filters → ML-based systems to spot suspicious prompt patterns.
Output Sanitization → Stripping out harmful instructions before Gemini can act on them.
Confirmation Prompts → Asking the user before executing high-risk actions.
Key Lessons for Security Teams
Expand Threat Models → Treat LLMs and AI agents as part of your critical attack surface.
Secure the Data Pipeline → Sanitize inputs from every source the AI might read.
Adopt Defense in Depth → Detection, filtering, manual approvals, and least privilege access.
Educate Users → Even “safe” AI tools can be manipulated indirectly.
My Take: We’ve officially entered the age where “content is code” — and attackers will use every trick to turn your data into commands.
If your AI can read and act, you must assume someone will try to make it act against you.
Sources:
Regional Sales Director. Gurucul -Advisor Next Gen SIEM UEBA SOAR Data lake, Patented risk analytics platform powered by Ml (Agentic AI).
1wDeepak Kumar CISSP I think nothing will be perfectly built in the future, as we leave scope for improvement and leave vulnerable points. It will be important to see how we make ourselves capable of defending and also resilient!!
Commitment to Continuous Learning
1wThanks for sharing, Deepak
Global CISO | Certified DPO | Cybersecurity Leader | Digital Transformation Strategist | Trusted Advisor to Boards & CXOs , CIO's CISSP | CISM | CISA | CRISC | CIPM | CCISO | ITIL | ISO 27 k | CEH | AWSCSS ,
1wDefinitely worth reading
Security Team Manager at British Telecom
1wThanks for sharing, Deepak