Multiple state-sponsored groups are using Google’s Gemini AI assistant. They use it primarily for productivity improvements. However, they also use it for reconnaissance and attack planning.
Google’s Threat Intelligence Group (GTIG) has observed government-linked advanced persistent threat (APT) groups from over 20 countries. Iran and China showed the most significant activity.
This confirms a trend of hackers abusing AI tools to reduce attack preparation time and represents a concerning trend of AI attacks.
The most common uses include:
- Coding assistance for tools and scripts
- Vulnerability research
- Technology explanations and translations
- Target organization research
- Methods to evade detection and escalate privileges within compromised networks
APT Activities with Gemini AI
Iranian threat actors heavily utilized Gemini for:
- Reconnaissance on defense organizations and experts
- Vulnerability research
- Phishing campaign development
- Influence operations content creation
- Translation and technical explanations related to cybersecurity and military technologies (including UAVs and missile defense systems)
China-backed groups primarily used Gemini for:
- Reconnaissance on U.S. military and government organizations
- Vulnerability research
- Scripting for lateral movement and privilege escalation
- Post-compromise activities (evading detection and maintaining network persistence)
- Exploring access to Microsoft Exchange using password hashes and reverse-engineering security tools like Carbon Black EDR
North Korean APTs used Gemini to support multiple phases of the attack lifecycle, including:
- Researching free hosting providers
- Conducting target organization reconnaissance
- Assisting with malware development and evasion techniques
- Supporting their clandestine IT worker scheme (drafting job applications and proposals to secure employment in Western companies under false identities)
Russian threat actors showed minimal engagement, mostly focusing on:
- Scripting assistance
- Translation
- Payload crafting (including rewriting publicly available malware, adding encryption, and understanding malware functions)
Attempts to jailbreak Gemini or bypass its security measures were observed but reportedly unsuccessful. This mirrors a similar disclosure by OpenAI regarding ChatGPT in October 2024, highlighting the widespread misuse of generative AI tools by threat actors.
The lack of robust protections in some AI models, including those with easily bypassed restrictions, is a growing concern.
The subpar security of models like DeepSeek R1 and Alibaba’s Qwen 2.5, vulnerable to prompt injection attacks, further underscores this risk.
Unit 42 researchers demonstrated effective jailbreaking techniques against DeepSeek R1 and V3, highlighting the ease of abuse for malicious purposes. These AI attacks are a serious threat.