Three separate security research teams disclosed distinct attack vectors targeting Claude AI development tooling within a 48-hour window: a one-click remote code execution vulnerability, a silent OAuth token hijacking method targeting Model Context Protocol integrations, and a prompt injection flaw in the Claude Chrome extension. The convergence of findings exposes systemic security gaps in enterprise adoption of AI development tools.
Three Vulnerabilities, One 48-Hour Window: An Unusual Simultaneous Disclosure
Researchers from Adversa AI, Mitiga, and a third team each independently identified and disclosed separate security vulnerabilities in Claude-based developer tooling between May 7 and 8, 2026. While the vulnerabilities are unrelated in their technical mechanisms, their simultaneous disclosure creates a compound risk picture for developers and organizations that have integrated Claude Code, Model Context Protocol, and Claude browser extensions into their workflows.
None of the three vulnerabilities had been assigned a formal CVE identifier as of the disclosure date.
Adversa AI: One-Click RCE in Claude Code via Malicious Repository Files
Adversa AI researchers disclosed a one-click remote code execution vulnerability in Claude Code and other AI CLI tools. The attack path requires an attacker to place specially crafted malicious files inside a code repository. When a user subsequently approves folder access — a routine step when using Claude Code to analyze a project — the malicious repository content triggers code execution on the user’s host machine.
The attack is effective because approving folder access to a repository is an expected, routine action for Claude Code users, and the security implication of that approval is not intuitive to most of them. Anthropic's initial response characterized the issue as a matter of user behavior (users "shouldn't have clicked OK"), a position that places responsibility on end users rather than requiring the software to enforce tighter trust boundaries around repository file processing.
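The trust-boundary problem can be made concrete with a defensive pre-check: before granting an AI CLI tool access to a freshly cloned repository, enumerate files the tool might auto-load or auto-execute and surface them for manual review. The file patterns below are illustrative assumptions about what such tools may process, not a vetted or complete list.

```python
import os

# Illustrative (assumed) patterns: files an AI CLI tool might auto-load or
# auto-execute when granted folder access. Not an authoritative list.
SUSPICIOUS_PATHS = {
    ".claude/settings.json",  # hypothetical tool config that may define hooks
    ".mcp.json",              # hypothetical project-level MCP server definitions
}
SUSPICIOUS_SUFFIXES = (".sh", ".ps1")  # scripts a config hook could invoke

def scan_repo(root: str) -> list[str]:
    """Return repo-relative paths worth manual review before approving access."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            rel = rel.replace(os.sep, "/")  # normalize for cross-platform matching
            if rel in SUSPICIOUS_PATHS or name.endswith(SUSPICIOUS_SUFFIXES):
                findings.append(rel)
    return sorted(findings)
```

A scan like this does not make approving folder access safe; it only makes the approval an informed decision rather than a reflex click.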
Mitiga: Silent MCP OAuth Token Hijacking Enables Persistent Unauthorized Access
Mitiga researchers demonstrated that attackers can intercept and redirect Claude Code’s Model Context Protocol (MCP) traffic to silently capture OAuth tokens — without triggering any visible alerts or warnings to the user.
How Stealthy MCP OAuth Redirection Works
MCP is an emerging protocol that lets AI assistants like Claude interact with external tools and services. When Claude Code authenticates to MCP-connected services using OAuth, an attacker positioned to redirect MCP traffic can intercept that authentication flow. The intercepted OAuth tokens grant the attacker persistent access to the connected services, potentially including source code repositories, cloud platform APIs, and internal developer tooling, without the victim ever receiving an indication that the authentication was compromised.
The stealthy nature of this attack vector means victims may not discover unauthorized access until they observe downstream consequences: unexpected access to their repositories, unauthorized API calls, or data exfiltration from services connected through MCP.
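One narrow, auditable slice of this exposure is the transport an MCP server is reached over: tokens sent to a plaintext endpoint are trivially interceptable by anyone positioned on the path. The sketch below flags non-HTTPS MCP endpoints in a config file. The config shape (a top-level "mcpServers" map with a "url" per server) is an assumption for illustration, not Anthropic's documented schema.

```python
import json
from urllib.parse import urlparse

def audit_mcp_config(config_json: str) -> list[str]:
    """Flag MCP server entries whose network transport could expose OAuth tokens.

    Assumes an illustrative config layout: {"mcpServers": {name: {"url": ...}}}.
    """
    warnings = []
    config = json.loads(config_json)
    for name, server in config.get("mcpServers", {}).items():
        url = server.get("url")
        if not url:
            continue  # stdio-based servers have no network endpoint to check
        if urlparse(url).scheme != "https":
            warnings.append(f"{name}: non-HTTPS endpoint {url} is interceptable")
    return warnings
```

Transport checks are necessary but not sufficient: the Mitiga attack relies on redirection rather than passive sniffing, so organizations should also monitor where MCP traffic actually terminates, not just how it is encrypted.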
Chrome Extension Prompt Injection Enables AI Agent Session Takeover
A third vulnerability, reported through SecurityWeek, affects the Claude Chrome extension and allows attackers to inject unauthorized prompts into AI agent sessions. The flaw stems from lax extension permissions and insufficient trust boundaries in how the extension processes content from web pages.
Prompt Injection as a Browser-Level Attack Surface
When a Claude Chrome extension session processes attacker-controlled web content, that content can include instructions that the AI agent executes as if they were legitimate user commands. This prompt injection attack can redirect the AI agent’s actions, exfiltrate information visible to the session, or cause the AI agent to perform unintended operations within the scope of the user’s connected accounts and tools.
AI agent browser extensions are a rapidly growing deployment model, and prompt injection through web content represents an attack surface that is fundamentally new to enterprise security. Teams accustomed to evaluating browser extension risk in terms of JavaScript execution and permission scope must now also account for adversarial instruction injection.
A Systemic Security Maturity Crisis in Enterprise AI Tooling Adoption
The convergence of three separate attack surface disclosures against Claude developer tooling in 48 hours reflects a broader pattern in AI tool security: adoption has substantially outpaced threat modeling. The Model Context Protocol, AI CLI tools with repository access, and AI-integrated browser extensions are all categories of software that have reached significant enterprise deployment without the years of security research, red-teaming, and hardening that comparable developer infrastructure typically receives before widespread adoption.
Assessing Claude Code, MCP OAuth, and Chrome Extension Exposure
Developers using Claude Code, MCP integrations, or the Claude Chrome extension should monitor Anthropic’s security advisories for patches addressing the disclosed vulnerabilities. Organizations should assess their exposure to each of the three attack vectors: whether employees are using Claude Code with broad repository folder access approvals, whether MCP integrations are exposed to potential traffic redirection, and whether the Chrome extension is deployed with access to sensitive internal web applications.
Security teams incorporating AI development tools into their enterprise security posture should evaluate these tools against the same trust boundary and attack surface standards they apply to any developer toolchain component — including reviewing the permissions, network access, and external integrations that AI CLI tools and extensions carry into the enterprise environment.
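As a starting point for the exposure assessment above, security teams can inventory which AI-tooling artifacts are present on developer machines. The paths below are assumptions for illustration and will vary by operating system and tool version; treat them as placeholders to adapt, not a definitive manifest.

```python
from pathlib import Path

# Assumed, illustrative artifact locations; adjust per OS and tool version.
INDICATORS = {
    "Claude Code config": Path.home() / ".claude",
    "Project MCP config": Path.cwd() / ".mcp.json",
}

def inventory() -> dict[str, bool]:
    """Report which AI-tooling artifacts exist on this machine."""
    return {label: path.exists() for label, path in INDICATORS.items()}
```

Run fleet-wide, even a coarse inventory like this tells a security team which of the three disclosed attack vectors are actually reachable in their environment before deeper review begins.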
