State-Sponsored Hackers Abuse Google’s Gemini AI for Attacks

    Multiple state-sponsored groups are using Google’s Gemini AI assistant, primarily for productivity gains but also for reconnaissance and attack planning.

    This confirms an ongoing and concerning trend: threat actors are abusing generative AI tools to cut attack preparation time.

    The most common uses include:

    • Coding assistance for tools and scripts
    • Vulnerability research
    • Technology explanations and translations
    • Target organization research
    • Methods to evade detection and escalate privileges within compromised networks

    APT Activities with Gemini AI

    Iranian threat actors used Gemini heavily for:

    • Reconnaissance on defense organizations and experts
    • Vulnerability research
    • Phishing campaign development
    • Influence operations content creation
    • Translation and technical explanations related to cybersecurity and military technologies (including UAVs and missile defense systems)

    China-backed groups primarily used Gemini for:

    • Reconnaissance on U.S. military and government organizations
    • Vulnerability research
    • Scripting for lateral movement and privilege escalation
    • Post-compromise activities (evading detection and maintaining network persistence)
    • Exploring access to Microsoft Exchange using password hashes and reverse-engineering security tools like Carbon Black EDR

    North Korean APTs used Gemini to support multiple phases of the attack lifecycle, including:

    • Researching free hosting providers
    • Conducting target organization reconnaissance
    • Assisting with malware development and evasion techniques
    • Supporting their clandestine IT worker scheme (drafting job applications and proposals to secure employment in Western companies under false identities)

    Russian threat actors showed minimal engagement, mostly focusing on:

    • Scripting assistance
    • Translation
    • Payload crafting (including rewriting publicly available malware, adding encryption, and understanding malware functions)

    Attempts to jailbreak Gemini or otherwise bypass its security measures were observed but were reportedly unsuccessful. This mirrors a similar disclosure by OpenAI regarding ChatGPT in October 2024, highlighting how widespread the misuse of generative AI tools by threat actors has become.

    The lack of robust safeguards in some AI models, including those whose restrictions are easily bypassed, remains a growing concern.
