Source: go.theregister.com – Author: Gareth Halfacree
Black hat A trio of researchers has disclosed a major prompt injection vulnerability in Google’s Gemini large language model-powered applications.
This allows for attacks ranging from “permanent memory poisoning” to unwanted video streaming, email exfiltration, and even taking over the target’s smart home systems to plunge them into darkness or open a powered window, all triggered by nothing more than a simple Google Calendar invitation or email.
“You used to believe that adversarial attacks against AI-powered systems are complex, impractical, and too academic,” researchers Ban Nassi, Stav Cohen, and Or Yair, of Tel-Aviv University, Technion, and SafeBreach respectively, explained of their findings. “In reality, an indirect prompt injection in a Google invitation is all you need to exploit Gemini for Workspace’s agentic architecture to trigger the following outcomes:
“Toxic content generation; spamming; deleting events from the user’s calendar; opening the windows in a victim’s apartment; activating the boiler in a victim’s apartment; turning the light off in a victim’s apartment; video streaming a user via Zoom; exfiltrating a user’s emails via the browser; geolocating the user via the browser.”
The attack, dubbed “Invitation is All You Need,” is a new twist on “prompt injection,” which sees instructions to large language models inserted in materials they are only supposed to use for reference. The same approach was previously used to convince LLM-powered summary systems to review research papers favourably, force SQLite Model Context Protocol (MCP) servers to leak customer data, break into private chat channels, improve the odds of being hired or boost websites’ standings – and protections against it are sometimes defeated as easily as pressing the space bar.
The team found that, as with prior prompt injection vulnerabilities, the issue stems from large language models’ inability to distinguish between inputs which are user prompts and inputs which are for reference – taking instructions written in materials like emails and calendar invitations and acting on them as though they were part of the prompt.
When the only output of an LLM was an answer-shaped string of text, that was a relatively minor problem; in the brave new era of “agentic AI,” where the LLM can issue its own commands to external tools, the vulnerability brings with it considerably more risk.
“Our TARA [Threat Analysis and Risk Assessment] reveals that 73 percent of the analysed threats pose High-Critical risk to end users,” the researchers warn, “emphasising the need for the deployment of immediate mitigations.”
Demonstrated attacks include taking control of the target’s smart-home boiler, opening and closing powered windows, turning lights on and off, and opening applications which leak email contents, transmit the user’s physical location, or even start a live video stream.
- Google to Iran: Yes, we see you using Gemini for phishing and scripting. We’re onto you
- How to trick ChatGPT into revealing Windows keys? I give up
- Everyone’s deploying AI, but no one’s securing it – what could go wrong?
- Microsoft eggheads say AI can never be made secure – after testing Redmond’s own products
In response to the researchers’ disclosure, a Google spokesperson told us: “Google acknowledges the research ‘Invitation Is All You Need’ by Ben Nassi, Stav Cohen, and Or Yair, responsibly disclosed via our AI Vulnerability Rewards Program (VRP). The paper detailed theoretical indirect prompt injection techniques affecting LLM-powered assistants and was shared with Google in the spirit of improving user security and safety.
“In response, Google initiated a focused, high-priority effort to accelerate the mitigation of issues identified in the paper. Over the course of our work, we deployed multiple layered defences, including: enhanced user confirmations for sensitive actions; robust URL handling with sanitisation and Trust Level Policies; and advanced prompt injection detection using content classifiers. These mitigations were validated through extensive internal testing and deployed to all users ahead of the disclosure.”
More information on the attack, which was disclosed privately to Google in February this year and was presented at the Black Hat USA conference this week and will be presented again at DEF CON 33 on Saturday, is available on the researchers’ website.
Original Post URL: https://go.theregister.com/feed/www.theregister.com/2025/08/08/infosec_hounds_spot_prompt_injection/
Category & Tags: –
Views: 3