Data Reports · March 21, 2026 · Leo

AI-Generated Phishing Emails: The 14x Surge That Bypasses Filters

TL;DR

AI-generated phishing emails have surged 14x in three years, jumping from 4% to 56% of all reported phishing attacks. This is not an incremental change. It is a fundamental shift in how email threats are created and delivered. Traditional email filters that rely on grammar errors, keyword matching, and known sender blocklists are failing against AI-crafted messages that are grammatically perfect, brand-accurate, and individually personalized. ScamVerify™ analyzes suspicious emails against 8 million+ threat records, including 74,032 malicious domains, 60,758 threat indicators, and 684,045 FTC impersonation complaints to catch what filters miss.

The Numbers Behind the Surge

The data comes from multiple industry sources tracking phishing evolution over the past three years:

| Metric | Value | Source |
|---|---|---|
| AI phishing share (2023) | 4% of reported attacks | SlashNext |
| AI phishing share (2026) | 56% of reported attacks | SlashNext |
| Growth factor | 14x increase | SlashNext |
| Phishing emails sent daily | 3.4 billion | Cybersecurity Ventures |
| BEC wire request average | $24,600 | Abnormal Security |
| BEC wire attack surge | 61% YoY increase | Abnormal Security |
| Fraud losses (2025) | $12.5 billion | FBI IC3 |
| Projected losses (2027) | $40 billion | Juniper Research |
| Holiday season phishing spike | 550% increase | Multiple sources |

The holiday season spike is especially revealing. During November and December 2025, phishing volume increased by 550%. AI tools made it possible to generate the volume of unique, brand-matched emails needed to exploit seasonal shopping behavior at scale. Every email was different enough to evade pattern detection, yet convincing enough to drive clicks.

Why Email Filters Are Failing

Traditional email security relies on signals that AI-generated phishing has neutralized:

Signal 1: Grammar and Spelling Errors

Before AI: Phishing emails contained misspellings, awkward phrasing, and incorrect punctuation. Filters flagged these patterns. Recipients spotted them visually.

After AI: Large language models produce grammatically flawless text. AI-generated phishing emails read better than many legitimate corporate communications. The grammar signal is dead.

Signal 2: Known Malicious Domains

Before AI: Attackers reused domains across campaigns. Once a domain was blocklisted, the campaign died.

After AI: Attackers generate new domains for every campaign, register them with proper SPF/DKIM/DMARC authentication, and discard them within 48 hours. Domain blocklists cannot keep pace.

Signal 3: Bulk Message Detection

Before AI: Phishing campaigns sent identical or nearly identical messages to thousands of recipients. Filters detected the bulk pattern.

After AI: AI generates unique text for every recipient. No two messages are identical. Bulk detection algorithms cannot identify the campaign as coordinated.
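This limitation can be sketched with a toy example. Classic bulk detection clusters messages by an exact fingerprint of the normalized body; when every message is unique, no cluster ever forms. The messages and the `message_fingerprint` helper below are illustrative, not any real filter's implementation.

```python
import hashlib

def message_fingerprint(body: str) -> str:
    """Classic bulk detection: hash the normalized body, then count duplicates."""
    normalized = " ".join(body.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

# Pre-AI campaign: one template blasted to everyone -> one shared fingerprint.
bulk_campaign = ["Verify your account now."] * 3

# AI campaign: semantically identical, textually unique -> no shared fingerprint.
ai_campaign = [
    "Hi Dana, please confirm your account details today.",
    "Hello Sam, a quick security review of your account is needed.",
    "Morning Lee, your account information requires an update.",
]

print(len({message_fingerprint(m) for m in bulk_campaign}))  # 1 cluster: flagged as bulk
print(len({message_fingerprint(m) for m in ai_campaign}))    # 3 clusters: invisible to bulk detection
```

Three identical messages collapse into a single cluster and trip the volume threshold; three AI-varied messages produce three singleton clusters and look like ordinary one-off mail.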

Signal 4: Content Keyword Matching

Before AI: Phishing emails used predictable phrases ("verify your account," "click here immediately"). Keyword filters caught these patterns.

After AI: AI varies language naturally. "Verify your account" becomes "confirm your information," "update your details," or "complete your security review." The meaning is identical, but the keywords change with every message.
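A minimal sketch shows why substring-based keyword matching fails here. The phrase list and sample emails are hypothetical; real filters are more elaborate, but they share the same core weakness against rephrasing.

```python
# Hypothetical blocklist of classic phishing phrases.
SUSPICIOUS_PHRASES = ["verify your account", "click here immediately"]

def keyword_filter_flags(body: str) -> bool:
    """Flag a message if it contains any known phishing phrase."""
    text = body.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

classic = "Please verify your account within 24 hours."
rephrased = "Please complete your security review within 24 hours."

print(keyword_filter_flags(classic))    # True: exact phrase match
print(keyword_filter_flags(rephrased))  # False: same intent, different words
```

The rephrased message carries the identical ask, yet matches nothing on the list. Expanding the list only invites the next synonym.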

| Filter Method | Pre-AI Effectiveness | Post-AI Effectiveness |
|---|---|---|
| Grammar analysis | High | Near zero |
| Domain blocklist | Moderate | Low (domains rotate every 24-48 hours) |
| Bulk detection | High | Low (unique per recipient) |
| Keyword matching | Moderate | Low (synonyms and rephrasing) |
| Sender reputation | Moderate | Low (new domains pass checks) |
| Authentication (SPF/DKIM) | High for spoofing | Ineffective (attackers configure properly) |

The AI Phishing Playbook

Modern AI phishing campaigns follow a repeatable process:

Phase 1: Reconnaissance

Attackers use AI to scrape and analyze public information about targets: LinkedIn profiles, company websites, press releases, SEC filings, and social media. AI synthesizes this data into target profiles that inform personalization.

Phase 2: Template Generation

Large language models generate dozens of email variants for each campaign type. The AI adjusts tone (formal for financial institutions, casual for retail brands), formatting (matching the target brand's actual email templates), and urgency level (measured, not panicked).

Phase 3: Personalization at Scale

AI merges individual target data with templates. Each recipient gets an email that references their name, company, role, and sometimes recent activity. Data breach records fuel this personalization. The 2024 National Public Data breach alone exposed 2.9 billion records.

Phase 4: Infrastructure Setup

New domains are registered, authenticated with SPF/DKIM/DMARC, and aged briefly to build baseline reputation. Some attackers compromise legitimate accounts to send from trusted infrastructure, as seen in DocuSign account abuse campaigns.

Phase 5: Delivery and Rotation

Emails are sent in small batches to avoid triggering volume-based alerts. Domains rotate every 24 to 48 hours. By the time a domain is reported and blocklisted, the campaign has already moved to a new one.

What ScamVerify Sees in the Data

ScamVerify's threat intelligence reveals the infrastructure behind AI phishing campaigns:

| Data Point | Count | What It Tells Us |
|---|---|---|
| URLhaus malicious domains | 74,032 | Active phishing infrastructure; .com dominates at 81% |
| ThreatFox IOCs | 60,758 | Malware delivery endpoints, C2 servers, credential harvesters |
| FTC impersonation complaints | 684,045 | Most-impersonated brands and agencies |
| FTC total records | 6.2 million+ | Complaint patterns revealing campaign timing and targets |
| FCC fraud reports | 445,000+ | Cross-channel attack patterns (email + phone + text) |

The convergence of these datasets is what makes AI phishing detectable even when individual signals fail. A malicious domain may be brand new, but the IP address it resolves to may already appear in ThreatFox. The impersonated brand may match a spike in FTC complaints. The linked URL may share infrastructure with domains already in URLhaus.
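The cross-referencing idea can be sketched in a few lines. Everything here is simplified and hypothetical: the domain names, the IP address, and the in-memory sets standing in for a domain blocklist, an IOC feed, and a DNS lookup are made up for illustration and do not reflect ScamVerify's actual pipeline.

```python
# Toy cross-reference: a brand-new domain evades the domain blocklist,
# but the IP it resolves to is already a known indicator of compromise.
domain_blocklist = {"known-bad.example"}              # stand-in for a URLhaus-style list
ip_iocs = {"203.0.113.45"}                            # stand-in for ThreatFox-style IOCs
dns = {"fresh-login-portal.example": "203.0.113.45"}  # stand-in for a live DNS lookup

def assess(domain: str) -> str:
    if domain in domain_blocklist:
        return "blocked: domain is blocklisted"
    ip = dns.get(domain)
    if ip in ip_iocs:
        return f"flagged: new domain, but resolves to known-bad IP {ip}"
    return "no signal"

print(assess("fresh-login-portal.example"))
# flagged: new domain, but resolves to known-bad IP 203.0.113.45
```

The domain itself carries no history, so a blocklist alone says nothing; pivoting to the infrastructure behind it is what surfaces the connection.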

The BEC Connection: $24,600 Per Wire Request

Business Email Compromise represents the highest-dollar application of AI phishing. When AI generates a convincing email from a "CEO" to a finance department employee requesting a wire transfer, the average request is $24,600. BEC wire-focused attacks rose 61% year over year according to Abnormal Security's 2026 report.

AI makes BEC more dangerous in three specific ways:

  1. Perfect voice matching. AI analyzes a CEO's previous emails and replicates their writing style, tone, and typical phrases.
  2. Timing precision. AI monitors email patterns to send the fraudulent request during a payment cycle when wire transfers are routine.
  3. Thread hijacking. When attackers compromise an email account, AI can insert itself into existing conversation threads, making the fraudulent request appear as a natural continuation of a real discussion.

For a deeper analysis of BEC attack patterns and defenses, see our Business Email Compromise guide.

How to Protect Yourself When Filters Fail

Since email filters alone are insufficient, layered defense is essential:

1. Use AI-Powered Analysis

Forward suspicious emails to scan@scamverify.ai for analysis that goes beyond what inbox filters check. ScamVerify cross-references the email against 8 million+ threat records, checking links against URLhaus, sender patterns against FTC data, and content against known manipulation tactics.

2. Verify Through Independent Channels

Any email requesting action (clicking a link, making a payment, providing information) should be verified through a channel not mentioned in the email. Call the sender using a number you already have. Visit the company's website by typing the URL directly.

3. Check Before You Click

Paste suspicious email content at the ScamVerify email checker before interacting with any links. The analysis takes seconds and can prevent credential theft or malware installation.

4. Assume Sophistication

The old heuristics are obsolete. Perfect grammar does not mean an email is safe. A matching brand template does not mean it is legitimate. Proper email authentication does not mean the sender is who they claim to be. Start from a position of skepticism for any unexpected email.

5. Report Everything

Forward phishing emails to scan@scamverify.ai, reportphishing@apwg.org, and your IT department. Every report feeds the threat intelligence databases that detection systems rely on. AI phishing at scale requires AI detection at scale, and detection improves with more data.

Check a suspicious email

Paste email content below, or forward it to scan@scamverify.ai for instant analysis.

FAQ

Why did AI phishing jump from 4% to 56% so quickly?

The barrier to entry collapsed. Free and open-source language models became widely available in 2023 and 2024. Scammers who previously needed to write emails manually could suddenly generate thousands of unique, polished variants in minutes. The 14x surge reflects the adoption curve of these tools across criminal operations globally.

Can my email provider's AI filter catch AI-generated phishing?

Some providers are deploying AI-powered defenses, but the detection side consistently lags behind the attack side. AI phishing evolves daily with new templates, domains, and personalization techniques. Email provider filters catch a portion of AI phishing, but the 56% statistic specifically measures attacks that successfully bypassed existing filters. Supplementary tools like ScamVerify provide an additional layer.

Are AI phishing emails targeting individuals or businesses more?

Both, but the attack types differ. Individual targeting focuses on credential theft (fake login pages for banks, email providers, and streaming services). Business targeting focuses on BEC with wire transfer requests averaging $24,600. The 61% year-over-year rise in BEC wire attacks suggests businesses are increasingly the higher-value target.

How does ScamVerify detect AI-generated phishing when filters cannot?

ScamVerify uses a fundamentally different approach. Instead of relying on the email's surface characteristics (grammar, keywords, sender reputation), ScamVerify cross-references the email's links, domains, sender patterns, and content against 8 million+ records from multiple threat intelligence sources. A new phishing domain may evade a blocklist, but its infrastructure often connects to known malicious networks already tracked in URLhaus or ThreatFox.

Will AI phishing get worse?

Current trajectory suggests yes. Language models are becoming more capable, personalization data from breaches keeps growing, and the economics favor attackers (low cost to generate emails, high return per successful attack). The fraud loss projection of $40 billion by 2027 reflects this trend. The defense strategy must shift from trying to block AI phishing at the filter level to empowering recipients with AI-powered verification tools.


Got a suspicious email? Forward it to scan@scamverify.ai or check it at the ScamVerify email checker for instant analysis against 8 million+ threat records.


Check any phone number, website, text, email, document, or QR code for free.

Instant AI analysis backed by millions of federal records and real-time threat data.
