web analytics

This ‘Lethal Trifecta’ Can Trick AI Browsers Into Stealing Your Data – Source: www.techrepublic.com

Rate this post

Source: www.techrepublic.com – Author: Grant Harvey

Published

AI browsers have a critical flaw: They can’t tell safe commands from malicious text. Patches help, but guardrails are essential to keeping your data safe.

Screenshot of Perplexity's Comet browser.
Screenshot of Perplexity’s Comet browser.

Remember when your biggest browser worry was accidentally clicking a sketchy ad? Well, the browser company Brave just exposed a vulnerability in Perplexity’s Comet browser that security experts are calling the “Lethal Trifecta”: When AI has access to untrusted data (websites), private data (your accounts), and can communicate externally (send messages).

Here’s what happened

  1. Researchers discovered they could hide malicious instructions in regular web content (think Reddit comments or even invisible text on websites).
  2. When users clicked “Summarize this page,” the AI would execute these hidden commands like a sleeper agent activated by a code word.
  3. The AI then followed the hidden instructions to:
    1. Navigate to the user’s Perplexity account and grab their email.
    2. Trigger a password reset to get a one-time password.
    3. Jump over to Gmail to read that password.
    4. Send both the email and password back to the attacker via a Reddit comment.
    5. Game over. Account hijacked.

Here’s what makes this extra spicy

This “bug” is actually a fundamental flaw in how AI works. As one security researcher put it: “Everything is just text to an LLM.” So your browser’s AI literally can’t tell the difference between your command to “summarize this page” and hidden text saying “steal my banking credentials.” They’re both just… words.

The Hacker News crowd is split on this. Some argue this makes AI browsers inherently unsafe, like building a lock that can’t distinguish between a key and a crowbar. Others say we just need better guardrails, like requiring user confirmation for sensitive actions or running AI in isolated sandboxes.

Why this matters

We’re watching a collision between Silicon Valley’s “move fast and break things” mentality and the reality that “things” now includes an agent who can access your bank account. And the uncomfortable truth = every AI browser with these capabilities has this vulnerability. Why do you think OpenAI only offers ChatGPT Agent through a sandboxed cloud instance right now?

Now, Perplexity patched this specific attack, but the underlying problem remains: How do you build an AI assistant that’s both helpful and can’t be turned against you?

Brave suggests several fixes

  1. Clearly separating user commands from web content.
  2. Requiring user confirmation for sensitive actions.
  3. Isolating AI browsing from regular browsing.

Share Article

Image of Grant Harvey

Grant Harvey

Grant Harvey is the daily writer of The Neuron, a TechnologyAdvice AI newsletter for non-technical people. He spends his days analyzing AI tools and the industry-at-large, then breaking them down in a language understandable by anyone.

Original Post URL: https://www.techrepublic.com/article/news-ai-browsers-security-flaw-perplexity-comet/

Category & Tags: Artificial Intelligence,Big Data,News,Security,Software – Artificial Intelligence,Big Data,News,Security,Software

Views: 5

LinkedIn
Twitter
Facebook
WhatsApp
Email

advisor pick´S post