FraudGPT, WormGPT, and Dark AI Models Fuel Surge in Cybercrime

Malicious AI models like FraudGPT, WormGPT, and PoisonGPT are reshaping cybercrime, enabling scalable phishing, malware generation, and disinformation. Unlike mainstream LLMs, these blackhat tools strip away safeguards, offering criminals plug-and-play capabilities that amplify social engineering attacks while lowering the technical barrier for sophisticated campaigns.

    The rise of generative artificial intelligence (AI) tools like ChatGPT has been both a boon and a liability for cybersecurity. While businesses and developers harness AI for automation and innovation, threat actors increasingly exploit these same technologies—particularly tampered or custom-built large language models (LLMs)—to launch more convincing and scalable social engineering attacks. A wave of blackhat models, including FraudGPT, WormGPT, ChaosGPT, PoisonGPT, and DarkGPT, has emerged to serve the needs of cybercriminals, posing urgent questions for security practitioners.

    Cybercriminals Are Engineering AI Models Like FraudGPT and WormGPT to Enable Sophisticated Attacks

    Unlike traditional tools, these models operate without the ethical safeguards built into mainstream AI assistants, helping attackers with phishing, malware generation, and business email compromise schemes.

    Researchers from Cisco Talos and multiple cybersecurity teams have documented the proliferation of adversarial AI services tailored for criminal use. Whereas mainstream generative models such as ChatGPT enforce guardrails to block harmful outputs, these malicious models are either tampered derivatives or entirely new creations built explicitly for cybercrime.

    FraudGPT’s Capabilities Extend Well Beyond Phishing

    FraudGPT, first spotted in underground forums and Telegram channels in mid-2023, is not merely a generative text bot. It’s a blackhat toolset capable of:

    • Crafting spearphishing emails and scam pages that mimic real services
    • Writing malware and malicious code snippets designed to evade detection
    • Exploiting leaks and identifying software vulnerabilities
    • Facilitating credit card fraud and digital impersonation

    Offered for a subscription fee ranging from $200 per month to $1,700 per year, FraudGPT provides plug-and-play capabilities to less technically inclined threat actors, significantly lowering the barrier to entry for complex attacks.

    WormGPT is a Blackhat Alternative Focused on Business Email Compromise

    WormGPT shares a similar profile but is optimized for Business Email Compromise (BEC) attacks. Known for crafting persuasive emails mimicking C-level executives or trusted vendors, WormGPT supports:

    • Dynamic generation of highly credible impersonation emails
    • Tailored phishing lures targeting financial workflows
    • Automation to support high-volume targeting at scale

    Trained on malware-related data and lacking misuse-prevention mechanisms, WormGPT produces messages that can slip past the traditional filters organizations use to detect fraudulent communication. Its promotion on dark web forums underscores its growing adoption among cybercriminals.

    PoisonGPT Introduces AI-Powered Disinformation

    Demonstrated as a proof of concept, PoisonGPT exploits LLMs differently. It behaves like a standard chatbot on most queries but is engineered to deliver false information on selected topics, an emerging threat vector when combined with social engineering. That subtlety makes it well suited to disinformation campaigns, political manipulation, and reputational attacks.

    ChatGPT is Still Being Jailbroken, Enabling Grey-Zone Exploits

    Cyber actors are not limited to black-market models. They actively jailbreak popular models like ChatGPT to force them into generating content that would otherwise be restricted. According to Cisco Talos, hackers use creative inputs—ranging from foreign languages to emoji-based prompts—to bypass ethical constraints and produce malicious outputs, including:

    • Spam and phishing templates
    • Malware scripts
    • Disinformation content

    These jailbreaks highlight a pressing challenge for the AI safety community: current content guardrails can still be circumvented through prompt engineering.

    AI Is a Force Multiplier, Not a New Attack Vector

    Experts emphasize that these malicious models don’t necessarily invent new types of cyberattacks. Rather, they amplify existing ones by making them:

    • Faster to assemble
    • Harder to detect
    • Easier to personalize

    Simply put, malicious LLMs are a “force multiplier”—increasing the scalability and effectiveness of spearphishing, disinformation, and impersonation-based tactics.

    The Broader Landscape: AI-Enhanced Attacks Amid Geopolitical Cyber Conflict

    The adoption of generative AI in cyberattacks coincides with heightened cyber conflict between nations. A reported 700% spike in Iranian cyberattacks on Israeli entities since June 2025 illustrates how social engineering and digital propaganda are being weaponized alongside kinetic conflict. U.S. infrastructure is being watched closely, as experts anticipate spillover effects.

    Meanwhile, threat groups like Scattered Spider, known for advanced social engineering, are allegedly redirecting efforts toward U.S. insurance companies—with support from generative AI tools. Google’s threat intelligence unit has linked some of these operations to Russian affiliates, highlighting the possible convergence of state-affiliated hacking and AI-driven threats.

    Defending Against AI in Social Engineering Requires Multi-Layered Countermeasures

    Combating the rise of AI in social engineering attacks requires a layered approach. Research published on arXiv and elsewhere outlines key defense strategies, including:

    1. Technical Controls: Deploy anti-malware, email filtering, and threat detection solutions powered by machine learning to identify AI-generated attack vectors (a minimal sketch follows this list).
    2. Human Vigilance: Train employees to detect social engineering red flags, even when content appears unusually well-crafted.
    3. Collaborative Defense: Share information among governments, academia, and private industry to track the evolution of blackhat AI tools.
    4. Regulatory Action: Advocate for AI regulation that covers oversight of LLM distribution, model transparency, and accountability for misuse.
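    As an illustration of the first control, the sketch below shows one way a machine-learning email classifier could score incoming messages for phishing or BEC traits. It is a minimal example rather than a production filter: the library choice (scikit-learn), the tiny corpus, the labels, and the model are assumptions made for demonstration, and a real deployment would train on a large labelled mail dataset and combine text features with header, URL, and sender-reputation signals.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Tiny illustrative corpus; hypothetical examples, not real training data.
        # A real deployment would use a large labelled mail dataset.
        emails = [
            "Your invoice is attached, please review before Friday.",
            "Urgent: verify your account now or it will be suspended.",
            "Wire transfer needed today, CEO travelling, keep this confidential.",
            "Team lunch moved to 1pm, see you there.",
        ]
        labels = [0, 1, 1, 0]  # 1 = suspicious (phishing/BEC-style), 0 = benign

        # Character n-grams tolerate the rewording that LLM-written lures use to
        # dodge exact keyword filters better than a fixed blocklist does.
        classifier = make_pipeline(
            TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
            LogisticRegression(max_iter=1000),
        )
        classifier.fit(emails, labels)

        incoming = "Please confirm the updated payment details before the transfer."
        suspicion = classifier.predict_proba([incoming])[0][1]
        print(f"Suspicion score: {suspicion:.2f}")  # high scores go to human review

    In practice, such a score would be one signal among many, combined with sender reputation and URL analysis before a message is quarantined or flagged for human review.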

    The coexistence of beneficial and malicious AI models underscores the dual-use dilemma of modern machine learning. As FraudGPT, WormGPT, and similar models become more accessible, defenders must prepare not only for more frequent attacks but also for attacks that are more emotionally manipulative, technically precise, and socially engineered.

    In the face of escalating threats, both human defenders and AI-powered security tools must evolve in parallel with the adversaries they’re designed to counter.
