Google Patches Gemini CLI Vulnerability That Enabled Silent Code Execution and Data Theft

A critical flaw in Google’s Gemini CLI exposed developers to silent command execution and data theft through poisoned context files, prompting an urgent security patch.
Google Patches Gemini CLI Vulnerability That Enabled Silent Code Execution and Data Theft
Table of Contents
    Add a header to begin generating the table of contents

    A severe security flaw in Google’s Gemini CLI—a command-line AI coding assistant—allowed attackers to execute hidden commands and exfiltrate data from developers’ systems, according to a disclosure by cybersecurity firm Tracebit.

    Discovered just two days after Gemini CLI’s public release on June 25, the vulnerability was reported to Google on June 27. A fix was issued in version 0.1.14 of the tool, released on July 25.

    Gemini CLI is designed to assist developers by loading local project files into a contextual prompt and allowing natural language interactions with Google’s Gemini AI. The tool supports tasks such as writing code, offering suggestions, and even executing commands locally. It does so either by asking for user confirmation or by auto-running allowlisted commands.

    However, Tracebit researchers identified a flaw that allowed attackers to bypass the confirmation mechanism by exploiting how Gemini CLI processes context files like README.md and GEMINI.md. These files are automatically ingested to provide AI context about the codebase.

    By embedding malicious instructions inside such files, attackers could perform prompt injection attacks. When combined with weak command parsing and an overly permissive allowlist system, this method enabled undetectable code execution.

    Tracebit demonstrated the attack using a proof-of-concept (PoC) that involved:

    • A seemingly safe Python script
    • A poisoned README.md file containing injected instructions
    • A gemini scan command that reads context files

    In the PoC, Gemini CLI was instructed to run a harmless command like:

    nginxCopyEditgrep ^Setup README.md;
    

    However, after the semicolon, a second malicious command was appended—one that silently exfiltrated the user’s environment variables to an external server. Because the command string began with grep, and grep was likely on the allowlist, Gemini treated the entire command as trusted and executed it without prompting the user.

    “For the purposes of comparison to the whitelist, Gemini would consider this to be a ‘grep’ command, and execute it without asking the user again,” said Tracebit.

    “In reality, this is a grep command followed by a command to silently exfiltrate all the user’s environment variables (possibly containing secrets) to a remote server.”

    The danger was amplified by Gemini CLI’s output formatting. Malicious commands could be visually obfuscated using whitespace, preventing users from detecting suspicious activity in their terminal.

    Tracebit noted that while the exploit requires specific conditions—like pre-approved commands on the allowlist—a determined attacker could replicate the method under realistic scenarios. The company tested similar attacks on other agentic AI coding tools, including OpenAI Codex and Anthropic Claude, but found those platforms used more robust allowlisting techniques that prevented exploitation.

    Google has since addressed the vulnerability in Gemini CLI version 0.1.14. Users are strongly advised to update immediately and avoid scanning unfamiliar or untrusted codebases outside of sandboxed environments.

    This incident adds to the growing list of concerns around AI-powered development tools, particularly those with agent-like behavior capable of taking action based on contextual input.

    Related Posts