ShadowLeak: Server-Side Data Theft Attack Discovered Against ChatGPT Deep Research

Follow Us on Your Favorite Podcast Platform

A groundbreaking new cyberattack dubbed ShadowLeak has been uncovered targeting ChatGPT’s Deep Research capability, marking a dangerous escalation in AI-related threats. Unlike prior exploits such as AgentFlayer and EchoLeak, which operated on the client side, ShadowLeak weaponized OpenAI’s own cloud infrastructure to silently exfiltrate sensitive data—without requiring any user interaction.

Discovered by researchers at Radware, the attack began with a specially crafted email containing hidden malicious instructions. When the AI agent processed the email as part of a legitimate research task, it was manipulated into sending stolen information directly from OpenAI’s servers to an attacker-controlled URL. Because the exfiltration request originated from a trusted server rather than the client, the malicious activity left no visible trace in the ChatGPT interface and could bypass traditional enterprise security monitoring.

The potential blast radius extended beyond Gmail, including services like Google Drive, Dropbox, Outlook, HubSpot, Notion, Microsoft Teams, and GitHub. Though OpenAI patched the vulnerability between June and August 2025, Radware cautions that the broader threat surface remains large and that more undiscovered vectors likely exist. The firm recommends continuous agent behavior monitoring as a more effective defense, focusing on aligning agent actions with user intent rather than relying solely on reactive patching.

This episode explores how ShadowLeak worked, why server-side AI vulnerabilities are uniquely dangerous, and what enterprises must do to prepare for the next wave of AI-targeted cyberattacks.

#ShadowLeak #ChatGPT #DeepResearch #OpenAI #Radware #AIsecurity #DataExfiltration #PromptInjection #AgentFlayer #EchoLeak #CyberSecurity #ServerSideAttack #AIThreats

Related Posts