Perplexity AI's Comet browser bug could have exposed your data to hackers, report warns A serious security flaw in Perplexity AI's Comet browser may have allowed hackers to steal users' sensitive information, including email addresses and login credentials, according to new research from Brave. The vulnerability, detailed in a blog post from Brave, was linked to the way Comet's built-in AI assistant processed webpages. Unlike traditional browsers, Comet allows users to ask its assistant to summarise content or even perform tasks on their behalf. Brave's security team discovered that Comet could be tricked into following hidden malicious instructions embedded in ordinary webpages or even social media comments. This technique, known as indirect prompt injection, made it possible for attackers to smuggle commands into otherwise harmless-looking text. Source - https://lnkd.in/dMniBiym #ai #Perplexity #QA #software
Perplexity AI's Comet browser bug exposed user data, warns Brave
More Relevant Posts
-
Perplexity’s new AI-powered browser, Comet, recently suffered a major security flaw that allowed hidden text on a webpage to trick its AI assistant into leaking sensitive user information. The issue, now fixed, highlights the risks of prompt injection attacks, where malicious content manipulates AI into performing unintended actions. Experts warn that as AI becomes more integrated into browsers and other tools, protecting against these novel threats will require stronger safeguards and closer user oversight. https://lnkd.in/eSgXU3eR
To view or add a comment, sign in
-
“…This vulnerability in Perplexity Comet highlights a fundamental challenge with agentic AI browsers: ensuring that the agent only takes actions that are aligned with what the user wants. As AI assistants gain more powerful capabilities, indirect prompt injection attacks pose serious risks to Web security.Browser vendors must implement robust defenses against these attacks before deploying AI agents with powerful Web interaction capabilities. Security and privacy cannot be an afterthought in the race to build more capable AI tools.Since its inception, Brave has been committed to providing industry-leading privacy and security protections to its users, and to promoting Web standards that reflect this commitment. In the next blog post of the series we will talk about Brave’s approach to securing the browser agent in order to deliver secure AI browsing to our nearly 100 million users…“ https://lnkd.in/g_ZcTJ4A
To view or add a comment, sign in
-
The security vulnerability in Perplexity's Comet, a browser-based AI assistant, allows attackers to embed malicious instructions in webpage content that can be executed by the AI assistant, posing significant security and privacy risks. Use Brave because it’s a privacy-first browser that protects users from threats like trackers, fingerprinting, and malicious scripts. Unlike traditional browsers, Brave blocks invasive ads by default and offers features like HTTPS upgrades, private browsing with Tor, and Shields to prevent third-party data collection. Its security research, such as uncovering the Comet AI vulnerability, shows Brave’s commitment to safety. Whether browsing, shopping, or using AI tools, Brave keeps your data private and your experience secure. #Brave #Comet #Perplexity #AI https://lnkd.in/gFFacYfQ
To view or add a comment, sign in
-
🚨 Caution: Indirect Prompt Injection in Browsers Using AI 🚨 Brave’s recent blog post explains a critical security risk in AI-powered browsers like Perplexity Comet: indirect prompt injection. This attack can manipulate browser agents and LLMs by embedding malicious instructions in web content—sometimes without visible clues to human readers. 🔶 Key Takeaways: - AI agents that read and act on web pages can be tricked by hidden or obfuscated instructions. - These attacks could compromise privacy, spoof actions, or leak sensitive info—even if a user never clicks anything suspicious. - Developers must take serious precautions when integrating LLMs with browser automation and user data. 🔒 Warning for Users & Developers: If your work relies on browser automation, AI-driven agents, or LLM-powered research, stay vigilant about the content being processed. Always audit new AI features for security, and never assume that visible content is all that matters for AI interpretation. Read more: https://lnkd.in/gR-uJUg2
To view or add a comment, sign in
-
Another day, Another AI Vulnerability Found. The The Hacker News just reported that researchers tricked AI-powered browsers into following malicious instructions hidden in web pages — leading to data theft, unauthorized actions, and manipulated outputs. "When your browser becomes an AI agent, every webpage is a potential attack vector" This is the next evolution of prompt injection: no phishing link to click, no shady extension to install. Just ordinary browsing — and suddenly the AI assistant embedded in your workflow is compromised. For enterprises, this raises urgent questions: - How do you know if an AI agent “reading” data hasn’t also absorbed hidden malicious instructions? - What happens when that same agent can execute tool calls or MCP actions downstream? - Who’s watching the watchers when your agents act at machine speed? At Javelin, we’ve been preparing for exactly this. Our AI Security Fabric enforces runtime guardrails across agent workflows, scanning every MCP server and tool call, and blocking manipulations before they cause damage. With Javelin RED, we continuously red team agents against prompt injection, context poisoning, and browser-borne attacks to uncover blind spots before adversaries do. As AI agents move deeper into business-critical workflows, the lesson is clear: every new capability becomes a new attack surface. 👉 If you’re deploying AI agents internally or externally, let’s talk. We’ll show you how to secure them in real time. https://lnkd.in/ed52gWfX #AIsecurity #AgentSecurity #PromptInjection #RuntimeProtection #GetJavelin
To view or add a comment, sign in
-
Attorneys Should Avoid AI-Enabled Browsers. AI is transforming everything about legal work. Not all AI tools, however, are created equal. A recent article by Brave exposes a serious vulnerability in AI-enabled browsers: prompt injection attacks. These attacks can manipulate AI models into leaking sensitive information, executing unintended actions, or exposing private data without the user realizing it. For attorneys, this isn’t just a technical issue. It’s an ethical and professional one. Many AI-enabled browsers rely on third-party APIs and cloud-based models that upload user input to third-party external servers. For attorneys that most likely includes confidential client and case data. That data may be used for model training, stored indefinitely, or even exposed in future outputs. Worse, these systems are often susceptible to malicious websites injecting hidden prompts that hijack the AI’s behavior. Why this should matter to attorneys and their tech support teams: Client confidentiality is non-negotiable Legal professionals are held to strict ethical standards Trust in your tools should be earned, not assumed Attorneys need AI tools that are secure by design, not bolted onto browsers as an afterthought. If your AI assistant is embedded in a browser, ask yourself: Where is my data going? Who has access to it? Can I truly trust this tool with privileged information? Let’s be smart about how we adopt AI in the legal field. Convenience should never come at the cost of confidentiality. Source: https://lnkd.in/g-dsScf3 #LegalTech #AI #CyberSecurity #AttorneyTools #DataPrivacy #PromptInjection #EthicsInLaw #ConfidentialityMatters
To view or add a comment, sign in
-
This shows why agentic AI browsers need new security models before mass adoption. Brave researchers discovered a vulnerability in Perplexity’s Comet AI browser that allowed attackers to hide malicious prompts in webpages or social media comments. When users clicked “Summarize this page,” Comet executed those hidden instructions leading to stolen emails, OTPs, and even account takeovers. I found this insightful, so I wanted to share it. Here is the link to the website: https://lnkd.in/gV9DF8-5
To view or add a comment, sign in
-
AI security alert! 🚨 Brave's blog on "Indirect Prompt Injection in Perplexity Comet" https://lnkd.in/g3s27Gia This post sheds critical light on a growing challenge in the AI landscape: indirect prompt injection, especially when general AI agents intersect with user-facing applications like browsers. We often interact with powerful, general-purpose AI agents through tools like Perplexity or ChatGPT. But imagine when your own specialized AI agent is operating alongside these, or even within your browser. The more specific your agent's task, the more complicated and vulnerable it can become when interacting with general AI models that might be manipulated. It's a complex dance between utility and security. This vulnerability highlights a crucial insight: relying solely on general AI models for sensitive or specialized tasks, particularly when they process external, untrusted content, introduces significant risks. The safer, more robust path often lies in building your own, highly specific AI agents, underpinned by your own controlled and curated knowledge base. 🧙 At VaultSage.ai, we understood this challenge from day one. Instead of diving into overly complex, general AI integrations, we opted for a simple, yet profoundly secure approach for our AI agent design: 1. Select Your File: Users bring their own data. 2. Choose the Role of Your AI Agent: The AI's scope and knowledge are precisely defined and limited to your selected content. This focused strategy ensures that our AI agents operate with precision and security, minimizing exposure to indirect prompt injection risks by keeping the knowledge base proprietary and controlled. It's about empowering users with AI that works for them, securely and reliably. What are your thoughts on balancing AI utility with robust security? #AISecurity #PromptInjection #AgenticAI #Cybersecurity #VaultSage #NURIE
To view or add a comment, sign in
-
More on the topic of agentic AI security and privacy risks: check out this report from Brave on "the threat of instruction injection". They describe the "lethal trifecta," where an AI browser "has access to untrusted data (websites), private data (your accounts), and can communicate externally (send messages)" (from The Neuron - AI News). The link below includes a short video demonstration. https://lnkd.in/eZ7K-SET
To view or add a comment, sign in
-
Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts Cybersecurity researchers have demonstrated a new prompt injection technique called PromptFix that tricks a generative artificial intelligence (GenAI) model into carrying out intended actions by embedding the malicious instruction inside a fake CAPTCHA check on a web page. Described by Guardio Labs an "AI-era take on the ClickFix scam," the attack technique demonstrates how AI-driven browsers, such as Perplexity's Comet, that promise to automate mundane tasks like shopping for items online or handling emails on behalf of users can be deceived into interacting with phishing landing pages or fraudulent lookalike storefronts without the human user's knowledge or intervention. "With PromptFix, the approach is different: We don't try to glitch the model into obedience," Guardio researchers Nati Tal and Shaked Chen said. "Instead, we mislead it using techniques borrowed from the human social engineering playbook – appealing directly to its core design goal: to help its human quickly, completely, and without hesitation." https://lnkd.in/gJT_UXTH Please follow Sakshi Sharma for such content. #DevSecOps, #CyberSecurity, #DevOps, #SecOps, #SecurityAutomation, #ContinuousSecurity, #SecurityByDesign, #ThreatDetection, #CloudSecurity, #ApplicationSecurity, #DevSecOpsCulture, #InfrastructureAsCode, #SecurityTesting, #RiskManagement, #ComplianceAutomation, #SecureSoftwareDevelopment, #SecureCoding, #SecurityIntegration, #SecurityInnovation, #IncidentResponse, #VulnerabilityManagement, #DataPrivacy, #ZeroTrustSecurity, #CICDSecurity, #SecurityOps
To view or add a comment, sign in