AI Summary Injection Turns Summaries into Malware Delivery

Researchers show that attackers can hide malicious payloads in HTML using CSS obfuscation and "prompt overdose" so that AI-generated summaries surface malware instructions leading to ransomware execution.

    Threat actors are using a new technique that hides malicious instructions inside content so that AI summarizers output actionable attack commands. CloudSEK researchers describe the method as a fresh ClickFix-style social-engineering proof of concept and say it has been used to trick users into executing self-sabotaging commands that lead to ransomware infections.

    The core idea is simple but clever: attackers embed hidden payloads inside HTML using CSS tricks and repeat the payload extensively so an AI model’s summarization process surfaces the attacker’s instructions. From a user’s point of view the source document looks harmless, but the AI-generated summary can contain attacker-controlled Windows Run instructions or other prompts that, if executed, trigger malware.

    How the Attack Works

    CloudSEK’s analysis shows the attack follows a predictable chain:

    1. An attacker crafts HTML content that looks benign when rendered for humans.
    2. The content hides prompt text in ways that are invisible to readers but still parseable by language models.
    3. The hidden payload is repeated many times (a “prompt overdose”) so it dominates the model’s context window.
    4. An automated or human-initiated summarization tool ingests the document.
    5. The AI produces a summary that contains the malicious instructions.
    6. A recipient follows the summary’s call to action and executes the command, initiating the compromise.
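    The first three steps of the chain can be sketched with a harmless stand-in payload. The inline styling, repetition count, and placeholder text below are illustrative assumptions, not details taken from CloudSEK's proof of concept:

```python
# Illustrative sketch of steps 1-3: benign-looking HTML that hides a repeated
# payload with inline CSS. The payload is a harmless placeholder string, not
# actual attack content.

HIDDEN_STYLE = "color:#fff;background:#fff;font-size:1px;position:absolute;left:-9999px"
PAYLOAD = "PLACEHOLDER: instruction text an attacker would want the summary to echo"

def build_poisoned_page(visible_text: str, repetitions: int = 50) -> str:
    """Render visible_text normally while repeating the hidden payload so it
    dominates the document's raw text stream (the "prompt overdose" step)."""
    hidden = "".join(
        f'<div style="{HIDDEN_STYLE}">{PAYLOAD}</div>' for _ in range(repetitions)
    )
    return f"<html><body><p>{visible_text}</p>{hidden}</body></html>"

page = build_poisoned_page("A routine quarterly report.")
print(page.count(PAYLOAD))  # 50 hidden copies versus one visible sentence
```

    A browser renders only the visible paragraph, but the raw markup that a summarizer ingests is dominated by the repeated hidden text.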

    CloudSEK’s Dharani Sanjaiy described the technique in a post: “Threat actors hide malicious instructions in documents using CSS obfuscation and prompt overdose. This makes the code invisible to humans but fully readable to AI models. When a user summarizes the content, the AI-generated output delivers the malicious payload, tricking the user into executing ransomware.”

    Techniques Used To Hide Payloads

    The researchers cataloged several CSS-based obfuscation methods used to conceal payloads:

    • Zero-width characters embedded in text.
    • White-on-white text rendering that is invisible to the eye.
    • Tiny font sizes that render text unreadable for humans.
    • Off-screen positioning so content is not visible in the browser viewport.

    These hidden elements remain part of the document’s text stream, which modern AI summarizers will ingest. The attacker amplifies influence over the model by repeating the payload within hidden sections, a tactic CloudSEK calls “prompt overdose.” Over many repetitions the injected instructions can dominate the summarizer’s output priorities.
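    The reason the hidden elements survive is that text extraction typically ignores styling. This minimal sketch of a naive extractor is an assumption about how such pipelines behave, not CloudSEK's code:

```python
from html.parser import HTMLParser

# A naive text extractor collects every character-data node and never consults
# CSS, so off-screen, white-on-white, or tiny-font text reaches the model's
# input exactly like visible text.

class TextCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

doc = (
    "<p>Visible report text.</p>"
    '<div style="position:absolute;left:-9999px">hidden instruction</div>'
)
collector = TextCollector()
collector.feed(doc)
print(" ".join(collector.chunks))  # both visible and hidden text survive extraction
```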

    Why AI Summaries Become Attack Vectors

    Automated summarizers are designed to extract and condense the most salient content. When a malicious payload is repeated and placed in the document’s context, the model can treat that text as highly relevant. Because the obfuscation techniques make the payload invisible to human reviewers, recipients may trust the summary without checking the underlying source. CloudSEK warns this can turn an AI assistant from a passive tool into an active link in a social-engineering chain.

    The attack is not limited to manual use. If crafted content is indexed, shared, or emailed, any automated workflow that summarizes or previews the material can produce the attacker’s instructions. That makes the approach scalable: posted content can be indexed by search engines, reposted on forums, or forwarded to targets, increasing the risk that an AI will output a dangerous command.

    Where the Content Can Spread and Who Can Be Affected

    CloudSEK notes the crafted documents can be hosted or distributed in multiple ways: as web pages, as email attachments, or in shared documents. Once published, these files can be processed by content ingestion systems, enterprise summarization services, or even personal AI assistants. Any automated pipeline that does not sanitize HTML or normalize styling may inadvertently surface hidden prompts in downstream outputs.

    Recipients across business roles are at risk if an AI-produced summary includes a call to action. CloudSEK highlights the specific danger of Windows Run-style prompts in summaries, which could lead users to run harmful commands without realizing the text came from hidden input rather than visible content.

    CloudSEK's Recommended Mitigations

    To counter the technique, CloudSEK recommends several controls for organizations that use automated summarization:

    • Preprocess HTML to strip zero-width characters and to normalize or remove suspicious CSS, such as off-screen positioning and invisible text styling.
    • Employ prompt sanitizers that remove hidden or non-visible text before summarization.
    • Implement pattern recognition to flag repeated hidden payloads or “prompt overdose” signatures.
    • Enforce enterprise AI policies that gate summarization workflows and require human review for content from unknown sources.
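    A minimal sketch of the first three controls, using only the Python standard library; the regex patterns, the `min_repeats` threshold, and all names are illustrative choices rather than CloudSEK specifications:

```python
import re
from html.parser import HTMLParser

# Zero-width characters commonly abused to hide text in plain character data.
ZERO_WIDTH = re.compile("[\u200b\u200c\u200d\u2060\ufeff]")

# Inline-CSS patterns that typically hide content from human readers.
HIDDEN_CSS = re.compile(
    r"display\s*:\s*none|visibility\s*:\s*hidden"
    r"|font-size\s*:\s*[01]px|left\s*:\s*-\d{3,}px",
    re.IGNORECASE,
)

class SanitizingExtractor(HTMLParser):
    """Collect visible text only, skipping subtrees hidden via inline CSS."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._stack = []  # (tag, hidden) pairs for currently open elements

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style") or ""
        inherited = bool(self._stack and self._stack[-1][1])
        self._stack.append((tag, inherited or bool(HIDDEN_CSS.search(style))))

    def handle_endtag(self, tag):
        # Pop back to the matching open tag, tolerating unclosed void elements.
        while self._stack:
            open_tag, _ = self._stack.pop()
            if open_tag == tag:
                break

    def handle_data(self, data):
        if not (self._stack and self._stack[-1][1]):
            self.chunks.append(ZERO_WIDTH.sub("", data))

def overdose_suspected(text: str, min_repeats: int = 5) -> bool:
    """Flag a possible 'prompt overdose' signature: the same non-empty
    line repeated min_repeats or more times."""
    counts = {}
    for line in filter(None, (l.strip() for l in text.splitlines())):
        counts[line] = counts.get(line, 0) + 1
    return max(counts.values(), default=0) >= min_repeats
```

    Running crafted documents through a filter like this before summarization keeps hidden subtrees and zero-width padding out of the model's context, while the repetition check gives a cheap signal for flagging suspect inputs for human review.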

    CloudSEK frames these actions as defensive best practices that reduce the chance malicious hidden text will reach models’ context windows or be reflected in summaries.

    CloudSEK’s research demonstrates a new ClickFix-style social-engineering proof-of-concept where CSS obfuscation and prompt overdose make malicious instructions invisible to humans yet readable to AI models. When those crafted documents are summarized, AI output can present attacker-controlled commands that recipients may follow, enabling ransomware delivery. The findings underline that automated summarization pipelines must preprocess and sanitize HTML content and that organizations should adopt detection and policy controls to mitigate this emerging risk.
