LameHug Malware Uses AI-Powered Language Model to Launch Dynamic Windows Data Theft

LameHug malware uses an AI language model to craft system commands on the fly, targeting Windows machines in attacks linked to Russian-backed APT28.

    A new strain of malware called LameHug is pushing cyberattacks into uncharted territory by using an AI-powered language model to craft malicious Windows commands in real-time. The malware, discovered by Ukraine’s CERT-UA, has been linked to the Russian state-sponsored group APT28, also known as Fancy Bear, STRONTIUM, and several other aliases.

    The malware is written in Python and communicates with a large language model via the Hugging Face API. Specifically, it uses Qwen 2.5-Coder-32B-Instruct, an open-source code-generation model developed by Alibaba Cloud that translates natural language into executable scripts and shell commands.
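CERT-UA's advisory does not publish LameHug's source code, but the reported pattern, natural-language prompts sent to Qwen 2.5-Coder-32B-Instruct over the public Hugging Face Inference API, can be sketched roughly as follows. The prompt wording and function names here are illustrative assumptions, not the malware's actual text:

```python
import json
import urllib.request

# Model endpoint per the public Hugging Face Inference API URL scheme.
API_URL = ("https://api-inference.huggingface.co/models/"
           "Qwen/Qwen2.5-Coder-32B-Instruct")

def build_prompt(task: str) -> str:
    """Wrap a natural-language task in a code-generation prompt (assumed wording)."""
    return f"Respond with a single Windows cmd.exe command, no explanation: {task}"

def generate_command(task: str, token: str) -> str:
    """Ask the remote LLM to translate the task into a shell command."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"inputs": build_prompt(task)}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        # The text-generation endpoint returns a list of {"generated_text": ...}
        return json.load(resp)[0]["generated_text"]
```

The notable design choice is that no attack logic ships in the binary: the operator only needs to change the prompt, not the payload, to change the malware's behavior.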

    “LameHug was discovered after reports of phishing emails sent on July 10 from compromised accounts impersonating ministry officials,”
    —CERT-UA

    These emails were intended for government agencies and carried ZIP file attachments containing one of several identified malware loaders, including:

    • Attachment.pif
    • AI_generator_uncensored_Canvas_PRO_v0.9.exe
    • image.py

    Once deployed, LameHug dynamically generates and executes system reconnaissance and data-theft commands produced through real-time prompts to the language model. These commands include:

    • Collecting system details and saving them to info.txt
    • Recursively searching Documents, Desktop, and Downloads for files
    • Exfiltrating collected data via SFTP or HTTP POST
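For defenders studying the reported behavior, the reconnaissance flow above can be approximated with a benign sketch. The targeted folder names follow CERT-UA's description; the function names and the format of info.txt are assumptions:

```python
import os
import platform
import socket
from pathlib import Path

def collect_system_info(out_path: str = "info.txt") -> str:
    """Write basic host details to info.txt, as CERT-UA describes."""
    info = "\n".join([
        f"hostname: {socket.gethostname()}",
        f"os: {platform.system()} {platform.release()}",
        f"user: {os.environ.get('USERNAME') or os.environ.get('USER', '')}",
    ])
    Path(out_path).write_text(info)
    return info

def find_documents(roots) -> list:
    """Recursively enumerate files under the targeted folders
    (e.g. Documents, Desktop, Downloads)."""
    found = []
    for root in roots:
        for dirpath, _dirs, files in os.walk(root):
            found.extend(os.path.join(dirpath, f) for f in files)
    return found
```

The real malware would then upload the results over SFTP or an HTTP POST; that exfiltration step is deliberately omitted here.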

    CERT-UA has attributed this activity to APT28 with medium confidence.

    LameHug appears to be the first publicly known malware that integrates LLM capabilities directly into its operation, allowing it to adapt and evolve during the intrusion without requiring a new payload.

    This approach introduces a more flexible method of attack. It reduces the need for hardcoded commands, which makes static analysis and traditional malware detection much more difficult. By outsourcing command generation to a remote model, the malware increases stealth and may evade conventional cybersecurity tools.

    The malware also uses Hugging Face’s infrastructure as part of its command-and-control flow, potentially helping it blend in with benign API traffic and extend dwell time.
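One practical hunting consequence: outbound requests to LLM inference endpoints from hosts that have no business making them become a useful signal. A minimal sketch, where the watchlist contents are an assumption to be tuned per environment:

```python
from urllib.parse import urlparse

# Assumed watchlist: inference endpoints that rarely appear in normal
# traffic from workstations or servers in most environments.
SUSPECT_HOSTS = {
    "api-inference.huggingface.co",
}

def flag_llm_traffic(urls):
    """Return URLs whose hostname matches the inference-endpoint watchlist."""
    return [u for u in urls if urlparse(u).hostname in SUSPECT_HOSTS]
```

In practice this check would run over proxy or DNS logs rather than raw URLs, but the matching logic is the same.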

    CERT-UA did not confirm whether the AI-generated commands were fully executed as intended or whether data exfiltration attempts succeeded.

    With its AI-powered, code-generating core, LameHug may signal a new generation of adaptive malware designed to outpace static defenses and evolve on demand inside compromised networks.
