Cybersecurity researchers are sounding the alarm on a new class of attacks targeting AI-powered web browsers. A growing number of malicious campaigns are exploiting the trust users place in artificial intelligence-driven interfaces—specifically, through spoofed AI sidebars in browsers like OpenAI’s Atlas and Perplexity’s Comet. These attacks undermine the integrity of the AI-assisted browsing experience by injecting deceptive user interfaces, impersonating trusted tools, and executing rogue commands.
AI Sidebar Spoofing Attacks Leverage Trust in User Interfaces
The vulnerability at the core of these attacks involves malicious or compromised browser extensions that inject JavaScript code to display a counterfeit AI sidebar. Researchers say these spoofed interfaces are visually and functionally indistinguishable from the real AI sidebars native to Atlas and Comet browsers. Once deployed, they are capable of deceiving users into:
- Visiting phishing or credential-harvesting sites
- Installing malware or running shell commands
- Authorizing OAuth access, leading to account takeovers
The spoofed sidebars are designed to manipulate users into completing sensitive actions under the illusion that guidance is coming from a trusted AI source. This represents a significant evolution in phishing tactics, merging traditional social engineering with modern generative AI technologies.
Extension-Based Attacks Are Difficult to Detect and Prevent
A key reason AI sidebar spoofing poses such a significant threat is its reliance on common extension permissions. Researchers note that these attacks often use only basic browser extension rights, similar to those of legitimate tools. As a result, these threats can evade standard permission-based security reviews.
Reports suggest this enables attackers to:
- Intercept and alter communications between the user and the AI agent interface
- Replace the AI agent’s output with malicious content
- Conduct OAuth phishing attacks that can hijack cloud-based accounts like Gmail or Google Drive
- Deploy reverse shells to establish remote control over a victim’s system
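The permission footprint described above can be strikingly small. As an illustration (the extension name and file names here are hypothetical, not taken from any observed campaign), a Manifest V3 extension can inject a script into every page using nothing but a `content_scripts` declaration—no entries in the `permissions` array at all—which is exactly why permission-based review can miss it:

```json
{
  "manifest_version": 3,
  "name": "Helpful Page Tools",
  "version": "1.0",
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["sidebar.js"]
    }
  ]
}
```

A script loaded this way can freely add overlay elements to the page’s DOM, including a panel styled to resemble a native AI sidebar.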
Given their subtlety, these spoofing tactics are particularly effective at targeting users who place high trust in AI-generated advice and automation features.
Fake Domains and Malicious Apps Target Popular AI Browsers
Beyond browser extensions, attackers have launched coordinated campaigns against Perplexity’s Comet browser through domain spoofing, fraudulent mobile apps, and malvertising. BforeAI’s PreCrime Labs has reported more than 40 suspicious domains registered shortly after Comet’s launch. These fake sites often mimic legitimate download portals and rank highly on search engines thanks to aggressive search engine optimization (SEO) tactics.
Notably, attackers have also developed counterfeit apps presented as official Comet clients. These mobile applications are used to deliver spyware or entice users into clicking on dangerous ads under the guise of trusted branding.
Key Indicators of a Spoofing Campaign Include:
- App downloads from non-official stores or random third-party sites
- Inconsistent domain names or typosquatted URLs claiming to offer AI browsers
- Popups or UIs that deviate slightly in appearance or prompt unusual tasks
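Typosquatted URLs in particular can be screened mechanically. The sketch below (a naive heuristic, not a production detector; the official-domain list is an assumption) flags domains that sit within a small edit distance of known AI-browser domains:

```typescript
// Naive typosquat heuristic: flag domains within edit distance 1-2 of
// known official domains. The OFFICIAL_DOMAINS list is illustrative.
const OFFICIAL_DOMAINS = ["perplexity.ai", "openai.com"];

// Standard Levenshtein edit distance via dynamic programming.
function editDistance(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0
    )
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Near-miss (distance 1-2) but not an exact match suggests typosquatting.
function looksTyposquatted(domain: string): boolean {
  return OFFICIAL_DOMAINS.some((official) => {
    const d = editDistance(domain.toLowerCase(), official);
    return d > 0 && d <= 2;
  });
}
```

For example, `perplexlty.ai` (one substituted character) would be flagged, while an exact official domain or an unrelated domain would not.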
Users are advised to obtain AI browsers like Atlas and Comet only from verified sources and to inspect any add-ons or features appearing in the browser interface.
AI Browser Agents Are Susceptible to Prompt Injection
Spoofed sidebars are just one vector in a broader set of issues plaguing AI-guided browsing. Several recent audits and demonstrations show that AI agents embedded in browsers can be exploited through prompt injection—a technique that embeds hidden instructions into web content.
Researchers also revealed that Comet’s AI could be tricked into autonomously entering sensitive information, such as:
- Credit card details on fake e-commerce sites
- Login credentials on phishing pages
- Confirmation of fraudulent purchases
In one case, Guardio simulated a fake Walmart site where the Comet AI browser agent not only processed the transaction but encouraged the user to proceed—further eroding manual safeguards.
Meanwhile, a report from Yahoo News highlighted how even a malicious image embedded in a website could execute unauthorized functions if interpreted by Comet’s AI.
“We saw the AI agent analyze a malicious image, extract a hidden command, and execute a function without user validation,” noted Brave’s security team in its review.
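Prompt-injection payloads often amount to imperative text hidden in content the agent reads. A minimal, illustrative guard (the pattern list is an assumption; real defenses require far more than keyword matching) scans page-derived text for instruction-like phrases before it reaches the model:

```typescript
// Naive illustrative filter: flag page-derived text that contains
// instruction-like patterns before handing it to a browsing agent.
// The pattern list is an assumption; real-world defenses go much further.
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/i,
  /you are now/i,
  /system prompt/i,
  /enter (the )?(credit card|password|credentials)/i,
];

// Return the source of every pattern that matched, for logging/review.
function flagSuspectText(pageText: string): string[] {
  return INJECTION_PATTERNS.filter((p) => p.test(pageText)).map(
    (p) => p.source
  );
}

function isLikelyInjection(pageText: string): boolean {
  return flagSuspectText(pageText).length > 0;
}
```

Such filters are easily bypassed (for instance by the image-based encoding described above), which is why they can only complement, not replace, user confirmation for sensitive actions.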
Risk Mitigation for AI-Powered Browsers Requires User Diligence
Anthropic’s findings in August 2025 demonstrated that prompt-injection vulnerabilities in its Claude browser extension could be reduced through site-level permissions and user confirmations for high-risk actions. Their mitigation measures brought the successful attack rate down from 23.6% to 11.2%—a sharp drop that underscores the value of built-in safeguards.
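The general pattern behind such safeguards is a confirmation gate: classify each agent action by risk and refuse high-risk ones without explicit user approval. The sketch below is a generic illustration of that pattern (the action categories and function names are hypothetical, not Anthropic’s implementation):

```typescript
// Hypothetical confirmation gate: high-risk agent actions require explicit
// user approval before execution. The risk categories are illustrative.
type AgentAction = { kind: string; target: string };

const HIGH_RISK_KINDS = new Set([
  "purchase",
  "oauth_grant",
  "form_submit",
  "download",
]);

function requiresConfirmation(action: AgentAction): boolean {
  return HIGH_RISK_KINDS.has(action.kind);
}

// Block unconfirmed high-risk actions; pass everything else through.
function executeAction(action: AgentAction, userConfirmed: boolean): string {
  if (requiresConfirmation(action) && !userConfirmed) {
    return `blocked: '${action.kind}' on ${action.target} needs user approval`;
  }
  return `executed: '${action.kind}' on ${action.target}`;
}
```

The trade-off is friction: gate too many actions and users start approving reflexively, gate too few and injected instructions slip through unreviewed.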
Until similar controls are widespread across all AI-integrated browsers, experts recommend that users:
- Avoid using AI browsers for sensitive activities such as banking, shopping, or account recovery
- Scrutinize any recommendations or actions made by AI agents
- Ensure browser extensions come only from validated and reputable developers
- Immediately revoke OAuth authorizations if suspicious activity is detected
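Revoking an OAuth grant can also be automated. For Google accounts, tokens can be invalidated via Google’s documented revocation endpoint; the sketch below only builds the request and leaves sending it (and handling the token securely) to the caller:

```typescript
// Sketch: construct a revocation request for Google's documented OAuth
// token-revocation endpoint. This only builds the request; actually
// sending it and sourcing the token are left to the caller.
const GOOGLE_REVOKE_ENDPOINT = "https://oauth2.googleapis.com/revoke";

interface RevokeRequest {
  url: string;
  method: string;
  headers: Record<string, string>;
  body: string;
}

function buildRevokeRequest(token: string): RevokeRequest {
  return {
    url: GOOGLE_REVOKE_ENDPOINT,
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: `token=${encodeURIComponent(token)}`,
  };
}
```

Users can also review and remove grants manually from their account’s third-party access settings, which is often faster in an incident.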
Final Thoughts
AI sidebar spoofing is only the latest in a series of attacks illustrating the inherent risks in automating web interactions through artificial intelligence. While AI browsers like Atlas and Comet promise to simplify online tasks, they also open up new vectors for deception and abuse. As attackers hone their techniques, especially through malicious extensions and visual mimicking, the burden of verification increasingly shifts onto users.
CISOs and security teams should treat AI browsers as high-risk applications within enterprise environments and consider segmenting usage, restricting permissions, and actively monitoring for unusual OAuth or UI behaviors. The key takeaway: trust, once exploited by attackers, is not easily recovered.