You may want to reconsider using Perplexity Comet browser: vulnerable to a #PromptInjection attack with AI. "... While looking at #Comet, we discovered vulnerabilities which we reported to #Perplexity, and which underline the security challenges faced by #agentic AI implementations in #browsers. The attack demonstrates how easy it is to manipulate AI assistants into performing actions that were prevented by long-standing Web security techniques, and how users need new security and privacy protections in agentic browsers." Disclaimer - the article was made by the Brave browser team's security group. https://lnkd.in/g2EJPTdq
Perplexity Comet browser vulnerable to AI attack: security concerns for agentic browsers
More Relevant Posts
-
A vulnerability in #Perplexity #Comet, an #AIbrowser, allows attackers to inject malicious instructions into webpage content. These instructions can be executed by the AI assistant, #bypassing traditional #websecurity mechanisms. The attack demonstrates the need for new security architectures to prevent #unauthorisedactions and #dataexfiltration. https://lnkd.in/ebuiEunU #tech #media #news
To view or add a comment, sign in
-
“…This vulnerability in Perplexity Comet highlights a fundamental challenge with agentic AI browsers: ensuring that the agent only takes actions that are aligned with what the user wants. As AI assistants gain more powerful capabilities, indirect prompt injection attacks pose serious risks to Web security.Browser vendors must implement robust defenses against these attacks before deploying AI agents with powerful Web interaction capabilities. Security and privacy cannot be an afterthought in the race to build more capable AI tools.Since its inception, Brave has been committed to providing industry-leading privacy and security protections to its users, and to promoting Web standards that reflect this commitment. In the next blog post of the series we will talk about Brave’s approach to securing the browser agent in order to deliver secure AI browsing to our nearly 100 million users…“ https://lnkd.in/g_ZcTJ4A
To view or add a comment, sign in
-
⚠️ Be cautious when using AI to summarize websites. You might unintentionally expose your login details. Hackers are already experimenting with prompt injection—a way to trick AI assistants into bypassing long-established web security measures. The Brave team recently shared an eye-opening example of this in action: 👉 https://lnkd.in/eKpsX3Q8 AI is powerful, but like any tool, it comes with risks. Awareness is the first step to staying secure.
To view or add a comment, sign in
-
With A2SPA this will not occur! This is exactly why we built A2SPA (Agent-to-Secure Payload Authorization). Every AI agent today runs unauthenticated by default, making prompt injection attacks like this inevitable. With A2SPA, every command is cryptographically signed and verified before execution — stopping these exploits at the protocol level. If A2SPA were implemented: • Each AI command would be cryptographically signed and verified. • Unauthorized or tampered prompts would be blocked before execution. • Full logs would be kept for auditing and accountability. Learn more: https://lnkd.in/ewnkBqMb You never automate without authentication—- I sound like a broken record;) Never trust without verifying Https://AImodularity.com/A2SPA #promptinjection #A2SPA #perplexity #cybersecurity
SEO Content/ News Writer & Editor | Legal & Academic Researcher & Consultant | Remote Educator | Thesis/Dissertation Specialist
Heads up, AI community—this is a big one. 🚨 Perplexity's Comet browser is reportedly vulnerable to prompt injection attacks, putting user privacy and data at risk. If true, this isn’t just a minor bug. It’s the kind of exploit that could allow malicious actors to: 🔓 Extract sensitive user data 🎭 Manipulate AI behavior ⚠️ Bypass key security controls This raises serious questions about how we secure AI-powered browsers—especially as they handle more of our queries, history, and personal context. Has your organization tested for prompt injection risks? Are we moving fast enough on AI safety? #CyberSecurity #AI #PromptInjection #Perplexity #DataPrivacy #TechNews #AISafety #InfoSec #Vulnerability #CyberAware https://lnkd.in/gAc3BFhn
To view or add a comment, sign in
-
sooner or later, security, and in this specific case, password managers, will have to work in parallel with AI browsers, MCPs, agents, etc. this is good news, but they will have to be completely transparent about how this will work. https://lnkd.in/dUHt8rQx
To view or add a comment, sign in
-
When stuck in traffic, Cris usually chats to her dog, but this time too, she used the moment to read about how AI browsers can’t tell legit sites from fake ones. That’s exactly the challenge we’re tackling at isAI — building solutions to reduce spoofing and make the web safer.
isAI Co-Founder CPO/CSO - Consultant - bp User Experience Researcher - AI Governance- AI for Social Good - Strategy - Tech4good - Business Mentor
Yesterday, stuck in the holiday traffic with my dog, I started reading this article written by Amber Bouman on a report from research conducted by Guardio on how AI browsers can't tell legitimate websites from malicious ones. Turns out AI browsers are creating a serious security blindspot that most users aren't aware of. What I found most alarming on this Tom's Guide article wasn't just the technical vulnerabilities but how these AI tools fundamentally misunderstand human caution. Key findings that caught my attention: • AI browsers inherit a critical flaw - they trust too easily and act without the natural skepticism humans have • In testing, an AI browser completed a full purchase on a fake Walmart site, auto-filling personal and financial details without any confirmation • When sent phishing emails, the AI actually marked malicious links as "to-do items" and clicked them automatically • Security becomes essentially "a coin toss" when AI is the only decision-maker • These browsers prioritize user experience over security - they're designed to please, not protect • The researchers at Guardio demonstrated this repeatedly with Perplexity's Comet browser, showing how easily it hands over sensitive information This research perfectly illustrates why verification and authentication are becoming more critical than ever. As we automate more of our digital lives, we need better systems to distinguish legitimate from malicious. What's your take - are we moving too fast with AI automation without considering the security implications? #CyberSecurity #AI #DigitalSafety #BrandProtection #ConsumerSafety https://lnkd.in/dURbmGbJ
To view or add a comment, sign in
-
At isAI we are getting heavily into examining scams, authenticity, reality... and, above all, how people navigate all of this in the context of AI. It's not easy. What we are seeing is layers of technology that have yet to get a grip on the new patterns in a defensive manner. What interests me in Cristina's post is the picture of AI browsers inheriting "AI’s built-in vulnerabilities – the tendency to act without full context, to trust too easily and to execute instructions without the skepticism humans naturally apply.” The scepticism that humans naturally apply is proving tough in the world of scams, as AI removes many of the visual clues that had previously protected us (a bit). If we then have browsers - here referred to as AI browsers - such as #Perplexity, that are designed to speed the interaction, where do we end up? #Google has approximately 70% of the global browser market, through Chrome. Chrome has significant security built in. As our consumption patterns change towards AI browsers, where does this leave us from an anti-scam perspective?
isAI Co-Founder CPO/CSO - Consultant - bp User Experience Researcher - AI Governance- AI for Social Good - Strategy - Tech4good - Business Mentor
Yesterday, stuck in the holiday traffic with my dog, I started reading this article written by Amber Bouman on a report from research conducted by Guardio on how AI browsers can't tell legitimate websites from malicious ones. Turns out AI browsers are creating a serious security blindspot that most users aren't aware of. What I found most alarming on this Tom's Guide article wasn't just the technical vulnerabilities but how these AI tools fundamentally misunderstand human caution. Key findings that caught my attention: • AI browsers inherit a critical flaw - they trust too easily and act without the natural skepticism humans have • In testing, an AI browser completed a full purchase on a fake Walmart site, auto-filling personal and financial details without any confirmation • When sent phishing emails, the AI actually marked malicious links as "to-do items" and clicked them automatically • Security becomes essentially "a coin toss" when AI is the only decision-maker • These browsers prioritize user experience over security - they're designed to please, not protect • The researchers at Guardio demonstrated this repeatedly with Perplexity's Comet browser, showing how easily it hands over sensitive information This research perfectly illustrates why verification and authentication are becoming more critical than ever. As we automate more of our digital lives, we need better systems to distinguish legitimate from malicious. What's your take - are we moving too fast with AI automation without considering the security implications? #CyberSecurity #AI #DigitalSafety #BrandProtection #ConsumerSafety https://lnkd.in/dURbmGbJ
To view or add a comment, sign in
-
Perplexity AI's Comet browser bug could have exposed your data to hackers, report warns A serious security flaw in Perplexity AI's Comet browser may have allowed hackers to steal users' sensitive information, including email addresses and login credentials, according to new research from Brave. The vulnerability, detailed in a blog post from Brave, was linked to the way Comet's built-in AI assistant processed webpages. Unlike traditional browsers, Comet allows users to ask its assistant to summarise content or even perform tasks on their behalf. Brave's security team discovered that Comet could be tricked into following hidden malicious instructions embedded in ordinary webpages or even social media comments. This technique, known as indirect prompt injection, made it possible for attackers to smuggle commands into otherwise harmless-looking text. Source - https://lnkd.in/dMniBiym #ai #Perplexity #QA #software
To view or add a comment, sign in
-
Another day, Another AI Vulnerability Found. The The Hacker News just reported that researchers tricked AI-powered browsers into following malicious instructions hidden in web pages — leading to data theft, unauthorized actions, and manipulated outputs. "When your browser becomes an AI agent, every webpage is a potential attack vector" This is the next evolution of prompt injection: no phishing link to click, no shady extension to install. Just ordinary browsing — and suddenly the AI assistant embedded in your workflow is compromised. For enterprises, this raises urgent questions: - How do you know if an AI agent “reading” data hasn’t also absorbed hidden malicious instructions? - What happens when that same agent can execute tool calls or MCP actions downstream? - Who’s watching the watchers when your agents act at machine speed? At Javelin, we’ve been preparing for exactly this. Our AI Security Fabric enforces runtime guardrails across agent workflows, scanning every MCP server and tool call, and blocking manipulations before they cause damage. With Javelin RED, we continuously red team agents against prompt injection, context poisoning, and browser-borne attacks to uncover blind spots before adversaries do. As AI agents move deeper into business-critical workflows, the lesson is clear: every new capability becomes a new attack surface. 👉 If you’re deploying AI agents internally or externally, let’s talk. We’ll show you how to secure them in real time. https://lnkd.in/ed52gWfX #AIsecurity #AgentSecurity #PromptInjection #RuntimeProtection #GetJavelin
To view or add a comment, sign in
-
Numerous tech companies are vying to harness the power of AI for a new generation of web browsers. Probably the most prominent is Perplexity's Comet, which it describes as a "personal assistant and thinking partner" while you surf the web. Unsurprisingly, that approach can have enormous cybersecurity implications. As privacy-focused browser company Brave noted in a blog post last week, it's alarmingly easy for bad actors to trick Perplexity's browser AI into following malicious instructions embedded in publicly available content. https://lnkd.in/g55PVkT5
To view or add a comment, sign in