Palo Alto Networks’ Unit 42 warns that threat actors are increasingly using generative AI to build more realistic and harder-to-detect phishing attacks. New research from Unit 42 shows adversaries are combining AI website builders, writing assistants, deepfakes, and chatbots to automate large-scale campaigns that closely mimic trusted brands and services.
“Adversaries are increasingly leveraging GenAI to create realistic phishing content, clone trusted brands, and automate large-scale deployment using services like low-code site builders,” Unit 42 researchers said. The team notes that AI-driven misuse has risen sharply and is already producing convincing attack assets.
Unit 42 Shows Rapid Rise in GenAI Use by Threat Actors
Unit 42’s analysis finds AI adoption among attackers has more than doubled within six months. While current AI-generated phishing is described as “relatively rudimentary,” researchers expect it to grow more convincing as AI website builders and content tools become more capable and ship with fewer guardrails.
The report highlights multiple abuse patterns now in use: AI-generated phishing pages and URLs, writing-assistant misuse to craft phishing copy, AI-driven deepfakes, and malicious chatbots that engage victims in real time.
How Attackers Are Using AI Website Builders and Writing Assistants
Unit 42 tested a popular AI website builder to see how quickly an attacker could generate a believable corporate site. The team said the platform produced a close replica of Palo Alto Networks’ site in roughly 60 seconds. Builders accept a short company description, then expand that into a full prompt that generates images, text, and a publishable index page with product descriptions and links.
The key weakness: many builders allow site creation and publication without email or phone verification. That lack of verification makes it trivial for an attacker to publish a site that impersonates a legitimate business or organization.
Third-party AI writing assistants are also being repurposed to create realistic phishing URLs and email copy. Unit 42 observed simple workflows in which an attacker uses an AI writer to generate text for a phishing landing page, hosts it on a look-alike domain, and then sends targeted emails to victims.
Deep Phishing Test Mimics Palo Alto Site
In the team’s proof-of-concept, a generated site contained believable pages for next-generation firewalls, cloud security, and threat intelligence. The site required minimal manual work to appear legitimate. Unit 42 flagged how these tools can quickly produce the visual and textual elements defenders use to validate authenticity.
The researchers also showed how an initial prompt can be programmatically expanded into a complete site prompt, which the builder then turns into copy, layout, and images. That speed and automation let attackers scale deep-phishing campaigns with minimal manual work.
Observed Misuse Rates Across AI Services
Unit 42 measured which AI offerings are most abused and reported the following approximate rates:
- About 40% of misuse involves AI website generators
- Roughly 30% involves AI writing assistants
- Close to 11% involves AI-powered chatbots
These percentages reflect observed campaigns and underscore that website generators currently present the largest immediate risk for scalable, convincing phishing.
Real-World Phishing URLs and Attack Flow Observed by Unit 42
Unit 42 shared examples where recipients received an email stating they had “new documents to view.” Clicking the link redirected users to an AI-generated phishing page—a convincing fake Microsoft site in the observed cases—designed to harvest credentials. In some workflows, the phishing page is hosted on the same AI platform; in others, attackers host content on legitimate hosting providers to avoid suspicion.
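To make that flow concrete, the following minimal Python sketch shows one way a defender might vet the links in such an email: extract every href and flag those whose registrable domain does not belong to the brand the message claims to come from. The brand allowlist, the `example-builder.app` lure domain, and the helper names are illustrative assumptions for this sketch, not tooling described in the Unit 42 report.

```python
# Illustrative sketch: flag email links whose domain does not match the
# claimed brand. Allowlist and example URLs are hypothetical.
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical per-brand allowlist of legitimate registrable domains.
BRAND_DOMAINS = {
    "microsoft": {"microsoft.com", "office.com", "live.com", "sharepoint.com"},
}

class LinkExtractor(HTMLParser):
    """Collects href values from anchor tags in an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def registrable_domain(url: str) -> str:
    # Naive two-label heuristic; production code should use a public-suffix list.
    host = (urlparse(url).hostname or "").lower()
    return ".".join(host.split(".")[-2:]) if host else ""

def suspicious_links(html_body: str, claimed_brand: str) -> list[str]:
    parser = LinkExtractor()
    parser.feed(html_body)
    allowed = BRAND_DOMAINS.get(claimed_brand, set())
    return [u for u in parser.links if registrable_domain(u) not in allowed]

# Example: a "new documents to view" lure pointing at a non-Microsoft domain.
body = ('<p>You have new documents to view.</p>'
        '<a href="https://docs-review.example-builder.app/login">Open</a>')
print(suspicious_links(body, "microsoft"))
# ['https://docs-review.example-builder.app/login']
```

A brand-mismatch check like this is only one signal; it complements, rather than replaces, URL reputation and DNS-layer controls.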
Palo Alto Recommends Advanced URL Filtering and DNS Security
To counter AI-generated phishing sites and malicious URLs, Unit 42 recommends that enterprises rely on advanced URL filtering and DNS security to detect and block known malicious locations. The research stresses that as AI site builders and content tools become easier to use, defenders must adopt controls that can identify suspicious domains and automated site creation.
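As a rough illustration of that advice, the sketch below approximates the kind of URL and DNS check a gateway might perform before allowing a connection. It is not Palo Alto Networks’ Advanced URL Filtering or DNS Security; the blocklist entries and site-builder suffixes are hypothetical placeholders standing in for real, continuously updated threat feeds.

```python
# Illustrative gateway-side check: block hostnames that appear on a threat
# feed, sit on an unvetted low-code builder subdomain, or fail to resolve.
# All domains below are hypothetical placeholders.
import socket
from urllib.parse import urlparse

KNOWN_MALICIOUS = {"login-paloaltonetworks-support.example.com"}          # threat-feed stand-in
BUILDER_SUFFIXES = (".example-builder.app", ".sites.example-lowcode.io")  # hypothetical suffixes

def should_block(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    if not host:
        return True                     # malformed URL, fail closed
    if host in KNOWN_MALICIOUS:
        return True                     # exact match against the blocklist
    if host.endswith(BUILDER_SUFFIXES):
        return True                     # hosted on an unvetted site-builder platform
    try:
        socket.getaddrinfo(host, 443)   # confirm the name actually resolves
    except socket.gaierror:
        return True                     # NXDOMAIN or lookup failure
    return False

# Requires network access for the DNS lookup in the second call.
print(should_block("https://login-paloaltonetworks-support.example.com/auth"))  # True
print(should_block("https://www.paloaltonetworks.com/"))                        # False
```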
“Within just six months, AI use has more than doubled and continues to grow steadily,” the Unit 42 report states. The team concludes that while current AI-assisted phishing is primitive in many cases, the trend points toward more convincing, large-scale attacks as the ecosystem matures.