Data Reports · March 21, 2026 · Leo

1 in 4 Americans Hit by AI Deepfake Voice Scams in 2026

TLDR

McAfee research found that 1 in 4 Americans (25%) has been targeted by an AI deepfake voice scam, and 77% of those who engaged with the scam lost money. Adults over 55 lose an average of $1,298 per incident, roughly three times the losses of younger adults. Only 24% of people can reliably distinguish an AI-cloned voice from a real person. Total AI-enabled fraud losses hit $12.5 billion in 2025 and are projected to reach $40 billion by 2027. A single deepfake video-and-voice conference call cost engineering firm Arup $25.6 million. ScamVerify™ tracks 8 million+ threat records across phone, text, email, website, document, and QR code channels to help identify the numbers and infrastructure behind these campaigns.

The Numbers: AI Voice Scams by the Data

| Metric | Value | Source |
| --- | --- | --- |
| Americans targeted by AI voice scams | 1 in 4 (25%) | McAfee Global AI Scam Survey |
| Victims who lost money (among engagers) | 77% | McAfee |
| Average loss, seniors (55+) | $1,298 | McAfee |
| Average loss, younger adults | ~$400 | McAfee |
| Can distinguish AI voice from real | 24% | Various studies |
| Global AI fraud losses (2025) | $12.5 billion | FBI IC3 / Experian |
| Projected AI fraud losses (2027) | $40 billion | Juniper Research |
| Largest single deepfake loss | $25.6 million (Arup) | News reports |
| ScamVerify threat records | 8 million+ | ScamVerify database |

With the exception of the 2027 forecast, these are not hypothetical projections. They are documented incidents affecting millions of Americans right now.

How AI Voice Cloning Works

The technology behind deepfake voice scams has become alarmingly accessible:

Three Seconds of Audio Is Enough

Modern AI voice cloning tools can create a convincing replica of someone's voice from as little as three seconds of audio. A voicemail greeting, a social media video, a podcast appearance, or a phone call recording provides sufficient source material.

The Cloning Process

  1. Audio capture: Scammer obtains a sample of the target's voice (social media, voicemail, public recordings, or a brief phone call)
  2. Model training: AI tools analyze pitch, cadence, accent, speech patterns, and vocal characteristics (the kind of low-level features illustrated in the sketch after this list)
  3. Voice synthesis: The AI generates new speech in the cloned voice, saying whatever text the scammer inputs
  4. Real-time capability: Advanced tools can clone voices in real time during a phone call, allowing interactive conversation in someone else's voice
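
To make step 2 concrete, here is a minimal sketch of extracting one such vocal characteristic, fundamental frequency (pitch), from a recording. It is illustrative only, not part of any actual cloning tool, and real systems learn far richer representations. It assumes a mono 16-bit WAV file; sample.wav is a placeholder path.

```python
# Minimal sketch: estimate per-frame pitch (F0) via autocorrelation, one of
# the low-level vocal features a cloning model learns to reproduce.
# Assumes a mono 16-bit WAV; "sample.wav" is a placeholder path.
import wave

import numpy as np

def estimate_pitch(frame: np.ndarray, rate: int,
                   fmin: float = 75.0, fmax: float = 400.0) -> float:
    """Return an estimated F0 in Hz for one frame, or 0.0 if unvoiced."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(rate / fmax), int(rate / fmin)  # plausible pitch-period lags
    if lo < 1 or hi >= len(corr):
        return 0.0
    lag = lo + int(np.argmax(corr[lo:hi]))
    return rate / lag if corr[lag] > 0 else 0.0

with wave.open("sample.wav", "rb") as wf:
    rate = wf.getframerate()
    audio = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)

frame_len = int(0.04 * rate)  # 40 ms analysis frames
pitches = [estimate_pitch(audio[i:i + frame_len].astype(float), rate)
           for i in range(0, len(audio) - frame_len, frame_len)]
voiced = [p for p in pitches if p > 0]
if voiced:
    print(f"mean F0 ~ {np.mean(voiced):.1f} Hz over {len(voiced)} voiced frames")
```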

The barrier to entry has collapsed. Free and low-cost voice cloning tools are available online. No technical expertise is required. A scammer with a laptop and a voice sample can produce a convincing clone in minutes.

Why 77% of Engagers Lose Money

The 77% conversion rate among those who engage with AI voice scams is extraordinarily high compared to traditional scams. Several factors explain this:

Emotional override. Hearing a loved one's voice (or what sounds like it) triggers an emotional response that bypasses rational thinking. When your "grandchild" calls crying for help, the emotional urgency overrides the skepticism you would apply to a text or email.

Voice as identity proof. Humans use voice recognition as a primary identity verification method. "I know it was them because I recognized their voice" has been a reliable heuristic for centuries. AI voice cloning breaks this fundamental trust mechanism.

Real-time interaction. Unlike phishing emails or scam texts that give victims time to think, a deepfake voice call creates real-time pressure. The scammer can respond to questions, express emotion, and adapt the conversation, maintaining the illusion throughout the interaction.

High-stakes scenarios. AI voice scams typically use emergency pretexts: arrest, accident, medical emergency, kidnapping. The stakes are high enough that victims feel they cannot afford to delay action while verifying the caller's identity.

The $25.6 Million Arup Incident

The most dramatic documented case involved engineering firm Arup, where a single deepfake conference call resulted in a $25.6 million loss. The attack used AI-generated video and voice to impersonate the company's CFO on a video call with a finance department employee. The employee, believing they were speaking with multiple senior executives, authorized a series of wire transfers.

This case demonstrates that AI voice scams are not limited to grandparent scams and personal fraud. They are a corporate threat capable of causing losses in the tens of millions.

Seniors: 3x Higher Losses

Adults over 55 lose an average of $1,298 per AI voice scam incident, compared to approximately $400 for younger adults. The disparity exists for several reasons:

| Factor | Why Seniors Are More Vulnerable |
| --- | --- |
| Voice trust | Seniors rely more heavily on voice as identity confirmation |
| Financial access | Seniors often have greater savings and home equity |
| Isolation | Fewer people to consult before taking action |
| Grandparent scam vector | AI can clone grandchildren's voices from social media |
| Technology gap | Less awareness of AI voice cloning capabilities |
| Authority compliance | Stronger tendency to comply with perceived authority figures |

The grandparent scam is the most common AI voice attack targeting seniors. A cloned voice of a grandchild calls claiming to be in jail, in a car accident, or in some other emergency requiring immediate money. The emotional connection between grandparent and grandchild, combined with a voice that sounds authentic, creates a powerful manipulation.

For more on protecting elderly family members, see our senior scam protection guide.

The Detection Problem: Only 24% Can Tell

Research consistently shows that most people cannot distinguish AI-cloned voices from real ones. Only about 24% of participants in controlled studies could reliably identify synthetic speech. The technology improves continuously, meaning detection accuracy is likely declining as voice cloning quality increases.

Why Human Detection Fails

  • AI models now reproduce the micro-patterns humans recognize subconsciously (cadence, breathing patterns, emphasis), so the cues we rely on no longer give the fake away
  • Phone audio quality is limited, which masks the subtle artifacts that might reveal AI generation in high-fidelity audio (see the sketch after this list)
  • Emotional states alter voice naturally (stress, crying, fear), so the "it sounds a little off" observation gets attributed to the caller being upset
  • Expectation bias fills in the rest: when you believe a call is from someone you know, your brain smooths over inconsistencies
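
To make the second point concrete, the sketch below simulates a telephone channel by band-limiting a wideband recording to the conventional 300-3400 Hz narrowband phone range, then measures how much energy above 4 kHz (a region where synthesis artifacts often live) survives. This is an illustration under assumptions: clip.wav is a placeholder for any wideband mono recording.

```python
# Illustration: telephone audio is band-limited to roughly 300-3400 Hz,
# discarding the high-frequency region where many synthesis artifacts hide.
# Assumes a wideband mono recording; "clip.wav" is a placeholder path.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, audio = wavfile.read("clip.wav")  # e.g. a 44.1 kHz recording
audio = audio.astype(float)

# 4th-order Butterworth band-pass approximating a narrowband phone channel
sos = butter(4, [300, 3400], btype="bandpass", fs=rate, output="sos")
phone_audio = sosfilt(sos, audio)

# Compare spectral energy above 4 kHz before and after the "phone channel"
freqs = np.fft.rfftfreq(len(audio), d=1 / rate)
wideband = np.abs(np.fft.rfft(audio))
narrowband = np.abs(np.fft.rfft(phone_audio))
hf = freqs > 4000
kept = narrowband[hf].sum() / max(wideband[hf].sum(), 1e-9)
print(f"energy above 4 kHz surviving the phone channel: {kept:.1%}")
```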

Cross-Channel AI Fraud

AI voice cloning does not operate in isolation. It is part of a broader AI-enabled fraud ecosystem:

Voice + phone: Deepfake voice calls impersonating family members, executives, or authority figures.

Voice + text: Scammer makes a deepfake voice call, then follows up with a text message containing a payment link or wire transfer instructions.

Voice + email: AI-generated voice message is combined with a phishing email that appears to come from the same person.

Voice + video: Deepfake video calls (like the Arup case) combine an AI-generated face and voice to make the impersonation as convincing as possible.

ScamVerify tracks threats across all these channels. A phone number associated with AI voice scam reports may also be linked to text campaigns, phishing emails, and malicious URLs in the threat database.
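
As a hypothetical sketch of what cross-channel linking can look like, consider a record that ties one phone number to indicators seen on other channels. This schema is an assumption for illustration, not ScamVerify's actual data model, and every value is a placeholder.

```python
# Hypothetical cross-channel schema for illustration only; this is not
# ScamVerify's actual data model, and all values are placeholders.
from dataclasses import dataclass, field

@dataclass
class ThreatRecord:
    phone_number: str
    channels: set = field(default_factory=set)       # e.g. {"voice", "sms"}
    linked_urls: list = field(default_factory=list)
    report_count: int = 0

record = ThreatRecord(phone_number="+15555550123")   # placeholder number
record.channels.update({"voice", "sms"})
record.linked_urls.append("hxxp://payment-link.example")  # defanged placeholder
record.report_count += 1

# A number reported for voice scams AND text campaigns warrants escalation
if {"voice", "sms"} <= record.channels:
    print(f"{record.phone_number}: multi-channel activity, escalate review")
```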

How to Protect Yourself

Immediate Steps

  1. Establish a family code word. Choose a word or phrase that only family members know. If someone calls claiming to be a relative, ask for the code word before taking any action.
  2. Verify by callback. Hang up and call the person back at their known number. If your "grandchild" called from an unknown number, call their actual phone directly.
  3. Never send money based on a phone call alone. Legitimate emergencies allow time for verification. If someone demands immediate payment and refuses to let you verify, it is a scam. (See the sketch after this list.)
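
The three rules above can be read as a simple decision procedure. The sketch below is only an illustration of that logic: the names, numbers, and code word are placeholders, and a real code word should be agreed offline, never stored in plain text.

```python
# Illustrative decision logic for the three rules above. All values are
# placeholders; agree on a real code word offline and never write it down.
TRUSTED_NUMBERS = {"Alex": "+15555550111", "Grandma": "+15555550122"}
FAMILY_CODE_WORD = "bluebird"  # example only

def should_act_on_call(caller_name: str, caller_number: str,
                       code_word_given: str) -> bool:
    """Apply the rules: code word first, then callback to a known number."""
    if code_word_given != FAMILY_CODE_WORD:
        return False  # rule 1: no code word, take no action
    if TRUSTED_NUMBERS.get(caller_name) != caller_number:
        return False  # rule 2: hang up and call back on the known number
    return True       # rule 3 still applies: verify before sending anything

# A caller with the right code word but an unknown number still fails
print(should_act_on_call("Alex", "+15559999999", "bluebird"))  # False
```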

Ongoing Protection

  1. Limit voice samples online. The more audio of your voice available publicly (social media, YouTube, podcasts), the easier it is to clone.
  2. Check unknown numbers on ScamVerify before calling back. The number used for the deepfake call may be associated with known scam operations.
  3. Forward suspicious emails to scan@scamverify.ai if the voice scam is accompanied by email follow-ups.
  4. Educate elderly family members about AI voice cloning. Many seniors are not aware that this technology exists.
  5. Report AI voice scams to the FTC at reportfraud.ftc.gov. Include details about the cloned voice and the requested payment method.

For a deeper technical analysis of how AI voice cloning technology works and how to detect it, see our AI voice cloning and deepfake phone scams guide.


FAQ

Can AI really clone my voice from just a few seconds of audio?

Yes. Modern AI voice cloning tools can create a recognizable replica from as little as three seconds of clear audio. Longer samples produce better clones. Any public audio of your voice, including voicemail greetings, social media videos, podcast appearances, and conference presentations, can serve as source material. The technology is freely available and requires no special technical knowledge.

How do I know if a call is using a deepfake voice?

Most people cannot reliably detect AI-cloned voices, especially over phone-quality audio. Instead of trying to detect the fake, verify the caller's identity through an independent channel. Hang up and call the person directly at their known number. Ask a question only the real person would know. Use your family code word. Do not rely on voice recognition alone.

Are AI voice scams illegal?

Using AI to impersonate someone for fraud is illegal under existing wire fraud, identity theft, and computer fraud statutes. The FTC has warned about AI-enabled impersonation scams, and the FCC has ruled that AI-generated voices in robocalls violate the Telephone Consumer Protection Act. However, enforcement is challenging because the technology is accessible globally and calls often originate from outside U.S. jurisdiction.

What is the $40 billion projection based on?

Juniper Research projects that AI-enabled fraud losses will grow from $12.5 billion in 2025 to $40 billion by 2027, driven by the increasing sophistication and accessibility of AI tools for voice cloning, text generation, and image/video deepfakes. The projection accounts for growth in both the number of attacks and the average loss per incident.
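
As a quick sanity check on what that implies, growing from $12.5 billion in 2025 to $40 billion in 2027 means two compounding years at roughly 79% annual growth:

```python
# Implied compound annual growth rate of the Juniper Research projection
start, end, years = 12.5e9, 40e9, 2
cagr = (end / start) ** (1 / years) - 1
print(f"implied annual growth: {cagr:.0%}")  # ~79% per year
```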

Should I remove all videos of myself from social media?

That is a personal decision. Reducing publicly available voice samples does limit the source material for cloning, but it may not be practical for everyone. A more effective approach is establishing verification protocols (code words, callback verification) that work even if your voice has been cloned. The goal is to make voice alone insufficient for identity verification.

Photo by Jason Rosewell on Unsplash
