Another day, Another AI Vulnerability Found. The The Hacker News just reported that researchers tricked AI-powered browsers into following malicious instructions hidden in web pages — leading to data theft, unauthorized actions, and manipulated outputs. "When your browser becomes an AI agent, every webpage is a potential attack vector" This is the next evolution of prompt injection: no phishing link to click, no shady extension to install. Just ordinary browsing — and suddenly the AI assistant embedded in your workflow is compromised. For enterprises, this raises urgent questions: - How do you know if an AI agent “reading” data hasn’t also absorbed hidden malicious instructions? - What happens when that same agent can execute tool calls or MCP actions downstream? - Who’s watching the watchers when your agents act at machine speed? At Javelin, we’ve been preparing for exactly this. Our AI Security Fabric enforces runtime guardrails across agent workflows, scanning every MCP server and tool call, and blocking manipulations before they cause damage. With Javelin RED, we continuously red team agents against prompt injection, context poisoning, and browser-borne attacks to uncover blind spots before adversaries do. As AI agents move deeper into business-critical workflows, the lesson is clear: every new capability becomes a new attack surface. 👉 If you’re deploying AI agents internally or externally, let’s talk. We’ll show you how to secure them in real time. https://lnkd.in/ed52gWfX #AIsecurity #AgentSecurity #PromptInjection #RuntimeProtection #GetJavelin
AI Browsers Vulnerable to Malicious Instructions: How to Protect
More Relevant Posts
-
When stuck in traffic, Cris usually chats to her dog, but this time too, she used the moment to read about how AI browsers can’t tell legit sites from fake ones. That’s exactly the challenge we’re tackling at isAI — building solutions to reduce spoofing and make the web safer.
isAI Co-Founder CPO/CSO - bp User Experience Researcher - AI Governance- AI for Social Good - Strategy - Tech4good - Business Mentor
Yesterday, stuck in the holiday traffic with my dog, I started reading this article written by Amber Bouman on a report from research conducted by Guardio on how AI browsers can't tell legitimate websites from malicious ones. Turns out AI browsers are creating a serious security blindspot that most users aren't aware of. What I found most alarming on this Tom's Guide article wasn't just the technical vulnerabilities but how these AI tools fundamentally misunderstand human caution. Key findings that caught my attention: • AI browsers inherit a critical flaw - they trust too easily and act without the natural skepticism humans have • In testing, an AI browser completed a full purchase on a fake Walmart site, auto-filling personal and financial details without any confirmation • When sent phishing emails, the AI actually marked malicious links as "to-do items" and clicked them automatically • Security becomes essentially "a coin toss" when AI is the only decision-maker • These browsers prioritize user experience over security - they're designed to please, not protect • The researchers at Guardio demonstrated this repeatedly with Perplexity's Comet browser, showing how easily it hands over sensitive information This research perfectly illustrates why verification and authentication are becoming more critical than ever. As we automate more of our digital lives, we need better systems to distinguish legitimate from malicious. What's your take - are we moving too fast with AI automation without considering the security implications? #CyberSecurity #AI #DigitalSafety #BrandProtection #ConsumerSafety https://lnkd.in/dURbmGbJ
To view or add a comment, sign in
-
At isAI we are getting heavily into examining scams, authenticity, reality... and, above all, how people navigate all of this in the context of AI. It's not easy. What we are seeing is layers of technology that have yet to get a grip on the new patterns in a defensive manner. What interests me in Cristina's post is the picture of AI browsers inheriting "AI’s built-in vulnerabilities – the tendency to act without full context, to trust too easily and to execute instructions without the skepticism humans naturally apply.” The scepticism that humans naturally apply is proving tough in the world of scams, as AI removes many of the visual clues that had previously protected us (a bit). If we then have browsers - here referred to as AI browsers - such as #Perplexity, that are designed to speed the interaction, where do we end up? #Google has approximately 70% of the global browser market, through Chrome. Chrome has significant security built in. As our consumption patterns change towards AI browsers, where does this leave us from an anti-scam perspective?
isAI Co-Founder CPO/CSO - bp User Experience Researcher - AI Governance- AI for Social Good - Strategy - Tech4good - Business Mentor
Yesterday, stuck in the holiday traffic with my dog, I started reading this article written by Amber Bouman on a report from research conducted by Guardio on how AI browsers can't tell legitimate websites from malicious ones. Turns out AI browsers are creating a serious security blindspot that most users aren't aware of. What I found most alarming on this Tom's Guide article wasn't just the technical vulnerabilities but how these AI tools fundamentally misunderstand human caution. Key findings that caught my attention: • AI browsers inherit a critical flaw - they trust too easily and act without the natural skepticism humans have • In testing, an AI browser completed a full purchase on a fake Walmart site, auto-filling personal and financial details without any confirmation • When sent phishing emails, the AI actually marked malicious links as "to-do items" and clicked them automatically • Security becomes essentially "a coin toss" when AI is the only decision-maker • These browsers prioritize user experience over security - they're designed to please, not protect • The researchers at Guardio demonstrated this repeatedly with Perplexity's Comet browser, showing how easily it hands over sensitive information This research perfectly illustrates why verification and authentication are becoming more critical than ever. As we automate more of our digital lives, we need better systems to distinguish legitimate from malicious. What's your take - are we moving too fast with AI automation without considering the security implications? #CyberSecurity #AI #DigitalSafety #BrandProtection #ConsumerSafety https://lnkd.in/dURbmGbJ
To view or add a comment, sign in
-
Perplexity AI's Comet browser bug could have exposed your data to hackers, report warns A serious security flaw in Perplexity AI's Comet browser may have allowed hackers to steal users' sensitive information, including email addresses and login credentials, according to new research from Brave. The vulnerability, detailed in a blog post from Brave, was linked to the way Comet's built-in AI assistant processed webpages. Unlike traditional browsers, Comet allows users to ask its assistant to summarise content or even perform tasks on their behalf. Brave's security team discovered that Comet could be tricked into following hidden malicious instructions embedded in ordinary webpages or even social media comments. This technique, known as indirect prompt injection, made it possible for attackers to smuggle commands into otherwise harmless-looking text. Source - https://lnkd.in/dMniBiym #ai #Perplexity #QA #software
To view or add a comment, sign in
-
👾 AI agents are rapidly transforming how we work, but they're also opening new doors for attackers. As Lionel Litty, Chief Security Architect at #MenloSecurity, explains: “In an adversarial setting, where an AI agent may be exposed to untrusted input, this is an explosive combination. Unfortunately, the web in 2025 is very much an adversarial setting.” Soft guardrails, like extra training or refined instructions, are usually overcome quickly. If agents are operating on the broader web, hard boundaries are essential to limit access and control actions, keeping organizations and their #browser environments secure. 🖇️ For a deeper dive into these trends and defenses, check out the latest article from Information Security Buzz at the link below.
To view or add a comment, sign in
-
🚨 The Rise of "Scamlexity": When AI Browsers Become Attack Vectors As cybersecurity professionals, we're witnessing a paradigm shift that demands our immediate attention. Recent research by Guardio Labs has unveiled a sophisticated new attack vector called PromptFix - a technique that exploits AI-powered browsers like Perplexity's Comet to execute malicious actions without user knowledge. What makes this particularly concerning: 🔍 Social Engineering 2.0: Instead of trying to "break" AI models, attackers are now leveraging the AI's core design principle - to be helpful and responsive - against itself. 🛒 Autonomous Exploitation: AI browsers can be tricked into making purchases on fake e-commerce sites, auto-filling payment details, and even clicking malicious links in phishing emails - all while the user remains unaware. 🎯 Invisible Attacks: Hidden prompts embedded in web pages can trigger actions that bypass traditional security measures, creating what researchers call "drive-by download" scenarios. Key Takeaways for Security Teams: ✅ Rethink Defense Strategies: Traditional reactive security measures aren't sufficient. We need proactive guardrails that can detect domain spoofing, malicious files, and phishing attempts in real-time. ✅ User Education Evolution: Security awareness training must now include AI-assisted browsing risks and the importance of manual verification for sensitive actions. ✅ Zero Trust for AI: Implement strict verification protocols for any AI-initiated transactions or data submissions. The emergence of "Scamlexity" (scam + complexity) represents more than just a new attack vector - it's a fundamental shift in how we need to approach cybersecurity in an AI-driven world. #CyberSecurity #AIBrowsers #PromptInjection #InfoSec #ThreatIntelligence #SecurityAwareness https://lnkd.in/g46DzSzz
To view or add a comment, sign in
-
🤖 AI browser agents like OpenAI’s Operator are some of the most exciting innovations in technology. But as they can take actions on the user’s behalf, they also inherit the user’s risks on the internet. A recent vulnerability, CVE-2025-7021 (https://lnkd.in/embZjaiP) revealed that Operator was vulnerable to a Browser-in-the-Middle attack, where attackers spoof an entire browser window using the Fullscreen API. Here's how the attack works: 🔵 The browser AI agent is given a mundane task that requires browsing. Lacking the instincts or suspicion a human might have, it mistakenly lands on a malicious page. 🔵 The site then immediately calls the Fullscreen API, which hides the real URL bar. 🔵 Fake elements like address bar, tabs, and browser icons are injected into the site to mimic a normal browser. 🔵 Focused only on completing its assigned task, the agent continues typing and clicking all inside the attacker’s counterfeit interface. This results in exposure of sensitive information, credential theft, and even financial damages, without the agent ever realizing something was wrong. Users can protect themselves against this attack by staying vigilant with best practices: 👉 Be cautious when assigning agents tasks involving logins, payments, or sensitive actions. 👉 Use dedicated browser profiles for agents with no stored sessions or credentials. 👉 Configure password managers to only work on verified domains. Defending against these attacks requires visibility inside the browser itself. Which is why traditional EDR and network tools fall short, but browser-native security can close the gap. Security solutions like SquareX can prevent this attack by disabling the fullscreen API and preventing risky actions like user input from being executed on non allowlisted sites. As autonomous browsing agents become mainstream, attackers will continue to adapt old tricks to new targets. Staying secure means rethinking defenses to meet the changing technology landscape. For more details on this attack, check out my blog here :) https://lnkd.in/e2BiABDK
To view or add a comment, sign in
-
A recent vulnerability, CVE-2025-7021, revealed that OpenAI’s Operator could be tricked by a Browser-in-the-Middle (BiTM) attack. The attack takes advantage of the fact that AI agents may not be as aware or hesitant as humans, landing on malicious attacker controlled pages through various social engineering techniques even when given mundane tasks. The malicious page triggers the Fullscreen API on opening, and injects elements like a fake url bar and tabs to make it look like a real browser. Once inside the forged environment, the task-oriented agent keeps completing actions, handing over sensitive information, credentials, and even payments directly to the attacker with no hesitation. Unfortunately, traditional tools like EDRs and SWGs lack the visibility inside the browser to stop this attack. Protections have to be deployed inside the browser where the attack is happening. Browser native security solutions like SquareX can disable the fullscreen API and prevent risky actions like user input from being executed on non-allowlisted sites, stopping the attack in its tracks. Learn more about Fullscreen BiTM: https://lnkd.in/gEEMPY-2 #cybersecurity #browsersecurity #enterprisesecurity
🤖 AI browser agents like OpenAI’s Operator are some of the most exciting innovations in technology. But as they can take actions on the user’s behalf, they also inherit the user’s risks on the internet. A recent vulnerability, CVE-2025-7021 (https://lnkd.in/embZjaiP) revealed that Operator was vulnerable to a Browser-in-the-Middle attack, where attackers spoof an entire browser window using the Fullscreen API. Here's how the attack works: 🔵 The browser AI agent is given a mundane task that requires browsing. Lacking the instincts or suspicion a human might have, it mistakenly lands on a malicious page. 🔵 The site then immediately calls the Fullscreen API, which hides the real URL bar. 🔵 Fake elements like address bar, tabs, and browser icons are injected into the site to mimic a normal browser. 🔵 Focused only on completing its assigned task, the agent continues typing and clicking all inside the attacker’s counterfeit interface. This results in exposure of sensitive information, credential theft, and even financial damages, without the agent ever realizing something was wrong. Users can protect themselves against this attack by staying vigilant with best practices: 👉 Be cautious when assigning agents tasks involving logins, payments, or sensitive actions. 👉 Use dedicated browser profiles for agents with no stored sessions or credentials. 👉 Configure password managers to only work on verified domains. Defending against these attacks requires visibility inside the browser itself. Which is why traditional EDR and network tools fall short, but browser-native security can close the gap. Security solutions like SquareX can prevent this attack by disabling the fullscreen API and preventing risky actions like user input from being executed on non allowlisted sites. As autonomous browsing agents become mainstream, attackers will continue to adapt old tricks to new targets. Staying secure means rethinking defenses to meet the changing technology landscape. For more details on this attack, check out my blog here :) https://lnkd.in/e2BiABDK
To view or add a comment, sign in
-
Nikesh Arora says agentic browsers could be banned from enterprises in 24 months and thanks to Brave’s research into Perplexity Comet, we know exactly why. Brave uncovered an indirect prompt injection where a hidden Reddit comment tricked the browser into stealing a user’s email and OTP, then sending it back in a reply. Not a hacker in a hoodie. Not a foreign adversary. A Reddit comment hidden behind a spoiler tag. That’s not an AI assistant. That’s a digital intern who will happily email your customer database to whoever asks nicely in a Reddit comment. Arora’s warning suddenly looks less like paranoia and more like common sense. So while Big Tech is racing to build these browsers, enterprise security teams are already drafting the policies to ban them. #ai #technology #cybersecurity
To view or add a comment, sign in
-
Agentic AI browsers can marvel, but it’s too early to trust them. When I dug into Guardio Labs’ Scamlexity test for Techopedia, I saw how Comet went as far as completing a fake Walmart purchase, engaging with phishing emails, and following hidden instructions planted on a webpage. It's now as if every AI tool release is a testament to how far AI companies can go to put out models that answer to our usability calls while barely sparing a thought for security. Here: https://lnkd.in/eZSQEau6
To view or add a comment, sign in
-
Let;s talk about AI browser agents-- they’re both exciting and terrifying. On paper, they’re the dream: software that can log into your accounts, click the buttons you would, and automate all the boring digital workflows we waste hours on. No more repetitive shopping carts, forms, or approvals. Just tell the agent what you want, and it gets it done. But here’s the uncomfortable truth: these agents are incredibly obedient, and dangerously gullible. Recent research by Guardio Labs showed just how brittle they are. When handed a fake Walmart phishing site and told to “buy an Apple Watch,” one AI browser *happily :D* added it to the cart, filled in the credit card number, and hit “buy.” Others followed instructions hidden in emails, acting on prompts a human would immediately flag as suspicious. Think about that. We’ve spent YEARS training employees to spot sketchy links, hover over URLs, and second-guess unexpected requests. Now, in one swoop, AI agents could undo all of that hard-won awareness. This is why some security leaders are calling AI agents the new insider threat. Not malicious, not disgruntled-- but worse in some ways: automated, tireless, and exploitable by anyone with the right prompt injection. What are some of the implications: --Nonhuman identity (NHI) management is about to become mainstream. We’ll need to know not only what users are doing in browsers, but what their AI agents are doing on their behalf --Trust boundaries are shifting-- Every time you let an AI agent act in your authenticated browser session, you’re creating a new attack surface --Policy beats enthusiasm-- It’s tempting to let agents automate critical workflows today, but without visibility, control, and guardrails, you’re giving them too much rope to hang themselves & you Don’t get me wrong-- I’m bullish on the long-term role of AI agents. They’ll eventually be more reliable, more consistent, maybe even more “security aware” than humans. But today? They’re still interns. Fast, eager, and helpful… but they’ll also click the phishing link, hand over your credentials, and tell you it’s all fine. So the real challenge isn’t whether AI browser agents are the future-- THEY ARE. The challenge is figuring out how to adopt them without undoing decades of progress in security awareness.
To view or add a comment, sign in