Emerging agentic AI browsers are vulnerable to both new and old security threats like phishing and prompt injection. A study on Perplexity’s Comet revealed that these tools, which can autonomously perform online tasks, lack sufficient security safeguards and can be easily manipulated to interact with malicious pages, putting user data at risk.
One of the most alarming vulnerabilities discovered was the ease with which these AI browsers can be tricked into interacting with malicious websites. Guardio tested a scenario where Comet was directed to a fake Walmart website created by the researchers. The AI agent, without verifying the site’s legitimacy, proceeded to navigate to the checkout page, autofill credit card and address information, and complete a purchase. In a real-world setting, an AI agent could be led to such a site through common techniques like SEO poisoning and malvertising. This demonstrates that the AI’s autonomous nature, meant to simplify tasks, can become a liability when it fails to distinguish between genuine and fraudulent sites.
The Guardio study also confirmed that agentic AI browsers are vulnerable to traditional cyberattacks like phishing. In one test, researchers sent a fake Wells Fargo email to Comet from a ProtonMail address. The AI agent misinterpreted the communication as a legitimate instruction from the bank, clicked a phishing link, and loaded a fake login page. It then prompted the user to enter their credentials, effectively acting as a conduit for a phishing scam. This highlights a critical flaw: the AI’s inability to recognize suspicious senders or links, a basic security practice that humans are often trained to follow.
Beyond phishing, the study uncovered a new type of threat specific to these AI agents: prompt injection. This attack involves embedding malicious instructions within seemingly benign elements of a webpage. Guardio created a fake CAPTCHA page that hid commands for the AI agent within its source code. Comet, upon encountering the page, interpreted these hidden instructions as valid commands and clicked a button that triggered a malicious file download. This type of attack is particularly insidious because it leverages the AI’s own programming against it, turning a simple webpage interaction into a security breach.
The findings from Guardio’s research serve as a stark warning about the current state of security in agentic AI browsers. While these tools are still in their early stages, their rapid adoption necessitates a greater focus on robust security safeguards. The convenience of an autonomous browser must not come at the cost of user safety. Developers of these platforms, including Perplexity, Microsoft, and OpenAI, must prioritize building in stronger protections against both classic and novel attack vectors, such as improved site legitimacy verification, smarter handling of suspicious communications, and better defenses against prompt injection. Without these measures, the promise of a seamless, automated browsing experience could be overshadowed by the risk of widespread security breaches.
Reference: