TLDR
Global AI-enabled fraud losses reached $12.5 billion in 2025 and are projected to hit $40 billion by 2027, according to Juniper Research and Experian analysis. That is a 220% increase in two years. The surge is driven by AI voice cloning (1 in 4 Americans targeted), AI-generated phishing (14x increase to 56% of all attacks), and deepfake video scams (Arup lost $25.6 million in a single call). ScamVerify™ tracks 8 million+ threat records across phone, email, text, website, document, and QR code channels. AI is making every category of scam more effective, and the data shows the problem is accelerating, not stabilizing.
The Growth Trajectory
| Year | Estimated AI Fraud Losses | Key Driver |
|---|---|---|
| 2023 | ~$4 billion | Early AI phishing adoption |
| 2024 | ~$8 billion | Voice cloning becomes accessible |
| 2025 | $12.5 billion | AI phishing hits 56%, deepfake attacks scale |
| 2026 | ~$22 billion (projected) | Multi-channel AI attacks, QR phishing |
| 2027 | $40 billion (projected) | Full ecosystem saturation |
The trajectory is not linear. It is accelerating. Each year, AI tools become more capable, more accessible, and cheaper to operate. The barrier to launching sophisticated scam campaigns drops while the potential payoff increases.
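The acceleration shows up in the table's own figures: the projected dollar increase grows every year. A quick check, using the loss estimates from the table above:

```python
# Year -> estimated AI fraud losses in billions USD, taken from the table above
losses = {2023: 4, 2024: 8, 2025: 12.5, 2026: 22, 2027: 40}

years = sorted(losses)
for prev, curr in zip(years, years[1:]):
    increase = losses[curr] - losses[prev]
    pct = increase / losses[prev] * 100
    print(f"{prev}->{curr}: +${increase:g}B ({pct:.0f}%)")
```

The year-over-year jumps come out to +$4B, +$4.5B, +$9.5B, and +$18B: each step adds more than the last.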
Why $40 Billion Is Credible
The 60% Problem
Experian's 2025-2026 fraud report found that 60% of companies experienced increased fraud losses compared to the prior year. This is not a consumer survey or projection. It is documented financial impact across businesses that track fraud losses with precision.
When 60% of businesses report increasing losses, the aggregate trend is clear: the problem is getting worse faster than defenses are improving.
The Conversion Rate Multiplier
AI does not just increase the volume of scam attempts. It dramatically increases the success rate of each attempt:
| Metric | Pre-AI | Post-AI | Impact |
|---|---|---|---|
| Phishing email success rate | ~3-5% | ~56% fool recipients | 10x+ conversion |
| Voice scam engagement to loss | ~20% | 77% of engagers lose money | 4x conversion |
| Text scam detection by recipients | ~70% detected | Under 50% detected | ~1.7x more victims |
| Average time to detect (corporate) | Hours | Days to weeks | Higher per-incident loss |
When conversion rates multiply across billions of scam attempts, the total dollar impact compounds rapidly.
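The compounding effect can be sketched with round numbers. The attempt volume, conversion rates, and per-victim losses below are illustrative assumptions, not figures from this report:

```python
def expected_loss(attempts, conversion_rate, avg_loss_per_victim):
    """Expected total loss for a scam campaign."""
    return attempts * conversion_rate * avg_loss_per_victim

# Illustrative round numbers (assumptions, not report figures):
# same volume of attempts, but AI lifts conversion from 4% to 40%
# and per-victim loss from $500 to $1,000 via better targeting.
pre_ai  = expected_loss(1_000_000, 0.04, 500)    # $20,000,000
post_ai = expected_loss(1_000_000, 0.40, 1_000)  # $400,000,000
print(f"multiplier: {post_ai / pre_ai:.0f}x")    # 20x
```

Even with attempt volume held flat, a 10x conversion gain and a 2x per-victim gain multiply into a 20x total. In practice AI raises volume as well, which is why losses scale so sharply.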
The Infrastructure Cost Collapse
The cost of running scam operations has dropped dramatically due to AI:
| Capability | Cost Before AI | Cost With AI |
|---|---|---|
| Generate 1,000 phishing emails | $500+ (human writers) | Under $1 (AI generation) |
| Clone a voice for scam calls | Not possible at scale | Free (open-source tools) |
| Create a phishing website | $100+ (designer/developer) | Under $10 (AI templates) |
| Personalize messages with victim data | $50+ per campaign (manual) | Under $5 (AI batch processing) |
| Operate 24/7 multi-language campaigns | $10,000+/month (staff) | Under $100/month (AI automation) |
Lower costs mean more operators, more campaigns, and more total fraud.
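Falling costs and rising conversion combine into a collapse in cost per victim. The email costs below come from the table above; the conversion rates are illustrative assumptions:

```python
# Cost per 1,000 phishing emails, from the table above
cost_before, cost_after = 500, 1       # dollars per 1,000 emails
# Illustrative conversion rates (assumptions, not report figures)
conv_before, conv_after = 0.04, 0.40

def cost_per_victim(cost_per_1000_emails, conversion_rate):
    """Operator cost per successful victim from a 1,000-email batch."""
    victims = 1000 * conversion_rate
    return cost_per_1000_emails / victims

print(cost_per_victim(cost_before, conv_before))  # 12.5  -> $12.50 per victim
print(cost_per_victim(cost_after, conv_after))    # 0.0025 -> a quarter-cent per victim
```

Under these assumptions, the cost of landing one victim drops by a factor of 5,000, which is the economic engine behind "more operators, more campaigns, and more total fraud."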
AI Fraud by Channel
AI enhances scam operations across every communication channel. ScamVerify tracks threats across all six:
Phone: AI Voice Cloning
1 in 4 Americans targeted. AI clones voices from as little as three seconds of audio. The Arup incident ($25.6 million from a single deepfake video call) demonstrates the upper end of per-incident damage. Seniors lose an average of $1,298 per AI voice scam, 3x more than younger adults.
For the complete analysis, see our AI deepfake voice scam report.
Email: AI-Generated Phishing
14x surge, from 4% to 56% of all attacks. AI generates grammatically perfect, personalized phishing emails that bypass both technical filters and human detection. Business email compromise (BEC) losses, already the highest-dollar fraud category, are accelerating as AI makes impersonation more convincing.
Full analysis in our AI phishing email report.
Text: AI-Written Smishing
56% of people cannot distinguish AI scam texts from real messages. AI generates unique text for each recipient, defeating carrier spam filters that rely on duplicate message detection. Personalization using breached data makes every text feel targeted and legitimate.
Details in our AI text scam analysis.
Website: AI-Generated Phishing Sites
AI can generate complete, professional phishing websites in minutes. Template-based attacks using AI-written content create convincing replicas of banking portals, payment processors, and government sites. 74,032 malicious domains are currently tracked in the URLhaus database, and the number grows daily.
QR Code: AI-Enhanced Quishing
QR code phishing surged 5x in 2025. AI assists in creating convincing email templates, generating branded QR codes, and building the phishing sites that QR codes link to. The IRS added QR phishing to its 2026 Dirty Dozen for the first time.
See our quishing explainer.
Document: AI-Generated Fake Documents
AI can generate convincing fake invoices, contracts, tax documents, and official correspondence. Document analysis is the newest frontier in AI fraud, with attacks targeting businesses through fake invoices and individuals through forged government notices.
The Arup Case Study: $25.6 Million in One Call
The Arup incident is the single most expensive documented deepfake scam:
- Attackers created AI-generated video and voice of Arup's CFO and other senior executives
- A finance department employee was invited to a video conference call
- Multiple AI-generated "executives" appeared on the call, all deepfakes
- The employee was instructed to authorize a series of wire transfers
- Total loss: $25.6 million before the fraud was detected
This case is significant not because it is typical (most AI scams cause much smaller losses) but because it demonstrates the upper bound of what AI-enabled fraud can accomplish. If AI can fool a trained finance professional into authorizing $25.6 million in transfers, it can certainly fool consumers into entering credit card numbers or wiring a few thousand dollars.
What the $40 Billion Projection Means for Consumers
More Scam Attempts Per Person
As AI reduces the cost of scam campaigns, the volume of scam contacts per person will increase. If you currently receive several scam calls and texts per week, expect that to increase.
Higher Quality Scams
Each scam contact will be more convincing: better grammar, tighter personalization, more realistic voice impersonation, and more professional phishing sites. The "easy to spot" scam is disappearing.
More Channels
Scammers are expanding beyond phone and email to text, QR codes, documents, and messaging apps. Multi-channel attacks (email followed by phone call followed by text) are becoming standard.
Faster Campaigns
AI enables scammers to launch new campaigns within hours of a triggering event. A data breach, a policy change, a natural disaster, or a government announcement can be exploited the same day with AI-generated phishing tailored to the event.
How to Defend Against AI-Powered Fraud
The defense strategy must match the sophistication of the attack:
Use AI to Fight AI
Human detection of AI-generated scams is unreliable (56% of people are fooled). Tools that use AI analysis against massive threat databases provide detection that scales:
- Check phone numbers at ScamVerify against 8 million+ threat records
- Forward suspicious emails to scan@scamverify.ai for AI-powered analysis
- Check websites at ScamVerify website checker against 74,032 URLhaus domains
- Scan QR codes at ScamVerify QR scanner before visiting links
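Under the hood, checks like these reduce to fast membership lookups against large threat feeds. A minimal sketch of a domain blocklist check, assuming you have a local snapshot of a feed (the entries below are placeholders, not real URLhaus data):

```python
from urllib.parse import urlsplit

# Placeholder blocklist snapshot -- in practice this would be loaded
# from a threat-feed dump, not hardcoded.
BLOCKLIST = {"evil-login.example", "fake-bank.example"}

def normalize_host(url: str) -> str:
    """Lowercase hostname with a leading 'www.' stripped."""
    host = urlsplit(url).hostname or url
    return host.lower().removeprefix("www.")

def is_flagged(url: str, blocklist: set[str] = BLOCKLIST) -> bool:
    """True if the URL's host, or any parent domain, is blocklisted."""
    parts = normalize_host(url).split(".")
    # Check the host and each parent domain, so a flagged domain
    # also catches all of its subdomains.
    return any(".".join(parts[i:]) in blocklist for i in range(len(parts) - 1))

print(is_flagged("https://evil-login.example/verify"))    # True
print(is_flagged("https://pay.fake-bank.example/"))       # True (subdomain)
print(is_flagged("https://legit-shop.example/checkout"))  # False
```

A real deployment layers reputation scoring, lookalike-domain detection, and freshness checks on top of simple membership, since new phishing domains appear faster than any static list can capture.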
Establish Verification Protocols
Since AI can impersonate voices, faces, and writing styles, verification must rely on methods that AI cannot replicate:
- Family code words for verifying phone calls from "relatives"
- Callback verification using independently sourced phone numbers
- Multi-person authorization for financial transactions
- Out-of-band confirmation (verify an email request by phone, verify a phone request by email)
Reduce Your Attack Surface
- Limit public voice and video samples (they feed voice and face cloning)
- Opt out of data brokers to reduce personalization data available to scammers
- Use unique passwords and two-factor authentication on all accounts
- Keep software updated to close vulnerabilities that AI-generated exploits target
Report Aggressively
Report every scam contact to the FTC, FCC, and relevant platforms. The 90-95% non-reporting rate is a gift to scammers. Every report strengthens the threat databases that defensive AI systems rely on.
FAQ
Is the $40 billion number just for the United States?
The Juniper Research projection is global. U.S. losses are estimated to represent approximately 30-40% of the global total, suggesting $12-16 billion in U.S.-specific AI fraud losses by 2027. However, many AI fraud operations target victims across multiple countries simultaneously, making geographic attribution difficult.
Will AI eventually make scams undetectable?
AI is making scams harder to detect through human judgment, but AI-powered detection is keeping pace in many areas. The key is that defensive AI (like ScamVerify's analysis systems) has access to databases of millions of known threats, complaint histories, and malicious domains. A single AI-generated phishing email may be convincing to a human, but when checked against 8 million+ threat records, patterns emerge that identify the attack. The arms race favors defense when the defense has data.
What is the single biggest AI fraud risk for 2027?
Deepfake voice scams targeting elderly individuals and corporate finance employees. The 77% conversion rate among engagers, combined with the emotional manipulation of voice cloning, makes this the highest-impact vector per attempt. The Arup case ($25.6 million) shows the corporate risk. The 3x senior loss multiplier ($1,298 average) shows the consumer risk.
Are government agencies doing anything about AI fraud?
The FTC has issued guidance and enforcement actions against AI-generated deceptive communications. The IRS added QR phishing to its 2026 Dirty Dozen. The FBI tracks AI fraud through IC3. However, the pace of regulation lags behind the pace of AI tool development. The most effective near-term defense is consumer and business awareness combined with AI-powered detection tools.
How accurate is the $12.5 billion 2025 figure?
It is likely an undercount. The figure is based on reported losses and documented cases. The FTC estimates that only 5-10% of fraud is reported. If the $12.5 billion represents even 30% of actual losses, the true figure for 2025 could exceed $40 billion. The $40 billion 2027 projection may itself be conservative.