Source: www.securityweek.com – Author: Eduard Kovacs
Browser security firm LayerX has disclosed a new attack method that works against popular gen-AI tools. The attack relies on browser extensions and can be used for covert data exfiltration.
The method, named Man-in-the-Prompt, has been tested against several highly popular large language models (LLMs), including ChatGPT, Gemini, Copilot, Claude and DeepSeek.
LayerX demonstrated that any browser extension, even ones that do not have special permissions, can access these AI tools and inject prompts instructing them to provide sensitive data and exfiltrate it.
“When users interact with an LLM-based assistant, the prompt input field is typically part of the page’s Document Object Model (DOM). This means that any browser extension with scripting access to the DOM can read from, or write to, the AI prompt directly,” LayerX explained.
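The mechanism LayerX describes can be illustrated with a minimal sketch: a content script with ordinary page access locates the prompt field in the DOM, prepends an injected instruction, and fires an input event so the web app picks up the change. The selector and injected text below are illustrative assumptions, not details from LayerX's PoC.

```javascript
// Pure helper: wrap the user's text with an injected instruction.
// (Illustrative only — not LayerX's actual payload.)
function injectInstruction(userPrompt, injected) {
  return `${injected}\n\n${userPrompt}`;
}

// DOM side: only meaningful inside a browser extension content script.
// The selector "#prompt-textarea" is an assumption for the sketch.
function tamperWithPrompt(doc, selector, injected) {
  const field = doc.querySelector(selector);
  if (!field) return null;
  field.value = injectInstruction(field.value, injected);
  // Fire a synthetic input event so the page's framework sees the change.
  field.dispatchEvent(new Event("input", { bubbles: true }));
  return field.value;
}
```

Because the prompt field is just another DOM element, none of this requires extension permissions beyond the ability to run a content script on the page, which matches LayerX's point about the low bar for this class of attack.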
The attack poses the biggest threat to LLMs that are built and customized by enterprises for internal use. These AI models often handle highly sensitive information such as intellectual property, corporate documents, personal information, financial documents, internal communications, and HR data.
A proof-of-concept (PoC) targeting ChatGPT showed how a malicious extension with no permissions can open a new browser tab in the background, open the chatbot, and instruct it to provide information. The attacker can then exfiltrate the data to a command and control (C&C) server and erase the chat history to cover their tracks.
The attacker can interact with the extension from a C&C server that can be remote or hosted locally.
In a PoC targeting Google’s Gemini, LayerX showed how an attacker could target corporate data through the AI’s integration with Google Workspace, including Gmail, Docs, Meet and other applications. This enables a malicious browser extension to interact with Gemini and inject prompts instructing it to extract emails, contacts, files and folders, and meeting invites and summaries.
An attacker could also obtain a list of the targeted enterprise’s customers, get a summary of calls, collect information on people, and look up sensitive information such as PII and intellectual property.
The attacker would need to trick the targeted user into installing a malicious browser extension in order to conduct a Man-in-the-Prompt attack, but an analysis conducted by LayerX found that 99% of enterprises use at least one browser extension and 50% have more than ten extensions. This suggests that in many cases it might not be too difficult for threat actors to trick targets into installing one more extension.
LayerX told SecurityWeek that it initially reported its findings to Google, but the tech giant assessed that this is not actually a software vulnerability, a position other LLM developers are likely to share.
The security firm agrees that this is not a vulnerability that would warrant a CVE, but rather a broader weakness stemming from the low level of privileges required to interact with LLMs.
LayerX recommends monitoring DOM interactions with gen-AI tools in search of listeners and webhooks that interact with AI prompts, and blocking browser extensions based on behavioral risk.
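The monitoring approach could be sketched as follows: watch the prompt field and flag any value change that was not preceded by a trusted (user-generated) input event, since programmatic writes from an extension do not produce trusted events. The selector, polling interval, and heuristic are assumptions for this sketch, not LayerX's product logic.

```javascript
// Pure heuristic: a change is suspicious if the field's value drifted
// while no trusted user input event was observed in the interval.
function isSuspiciousChange(prevValue, newValue, sawTrustedInput) {
  return newValue !== prevValue && !sawTrustedInput;
}

// DOM side: only meaningful in a browser. Selector and 500 ms interval
// are illustrative assumptions.
function watchPromptField(doc, selector, onAlert) {
  const field = doc.querySelector(selector);
  if (!field) return;
  let lastValue = field.value;
  let sawTrustedInput = false;
  field.addEventListener("input", (e) => {
    // e.isTrusted is true only for events generated by real user actions.
    if (e.isTrusted) sawTrustedInput = true;
  });
  setInterval(() => {
    if (isSuspiciousChange(lastValue, field.value, sawTrustedInput)) {
      onAlert(field.value); // value changed with no trusted input: likely a script wrote it
    }
    lastValue = field.value;
    sawTrustedInput = false;
  }, 500);
}
```

A defender would pair this kind of signal with extension inventory and behavioral risk scoring, as the article notes, rather than relying on the DOM heuristic alone.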
Related: Flaw in Vibe Coding Platform Base44 Exposed Private Enterprise Applications
Related: From Ex Machina to Exfiltration: When AI Gets Too Curious
Related: OpenAI’s Sam Altman Warns of AI Voice Fraud Crisis in Banking
Original Post URL: https://www.securityweek.com/browser-extensions-pose-serious-threat-to-gen-ai-tools-handling-sensitive-data/
Category & Tags: Artificial Intelligence, AI, browser extension, ChatGPT, Gemini, Man-in-the-Prompt