Explainers · March 5, 2026 · Leo

How AI Is Making Scams Harder to Spot in 2026

TL;DR

AI is removing the traditional red flags that helped people identify scams. Voice cloning makes the grandparent scam nearly undetectable. AI-generated emails eliminate grammar errors. Deepfakes enable video-call impersonation. FTC data tracked by ScamVerify shows 154,716 impersonation complaints; 67% of those calls are robocalls, and the remaining 33% are human-operated, the segment where AI voice cloning has the most impact.

The Three AI Threats to Watch

1. AI Voice Cloning

What it does: Creates a convincing replica of anyone's voice from as little as 3 seconds of audio.

Where scammers get voice samples:

  • Social media videos (TikTok, Instagram, YouTube)
  • Voicemail greetings
  • Conference call recordings
  • Podcast appearances

How it is used in scams:

The grandparent scam is the clearest example. A scammer clones a grandchild's voice and calls their grandparent. The voice sounds identical to the real person. Traditional advice ("that doesn't sound like my grandchild") is no longer effective.

FTC data tracked by ScamVerify shows 154,716 impersonation-scam complaints. The 33% of calls run by live agents (versus robocalls) are the operations most likely to deploy AI voice cloning, because they already invest in human-like interaction.

The defense: Family code words. A shared secret word that only family members know cannot be cloned by AI.

2. AI-Generated Phishing Content

What it does: Produces grammatically perfect, contextually appropriate phishing emails and text messages at scale.

Before AI: Phishing emails were identifiable by:

  • Poor grammar and spelling
  • Awkward phrasing
  • Generic templates

After AI:

  • Perfect grammar in any language
  • Personalized content using scraped data
  • Adaptive tone that matches the impersonated brand
  • Dynamic content that changes per recipient

ScamVerify tracks 69,088 malicious domains through URLhaus. The phishing pages behind these domains increasingly use AI-generated content that is indistinguishable from legitimate company communications.

The defense: Do not rely on grammar or writing quality as a detection signal. Use systematic checks: sender address, URL verification, authentication headers, and tools like ScamVerify.
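One of those systematic checks, URL verification, can be automated. Below is a minimal Python sketch of a dot-boundary domain check; the `domain_matches` helper and the example URLs are illustrative, not part of any ScamVerify tool:

```python
from urllib.parse import urlparse

def domain_matches(url: str, expected: str) -> bool:
    """Return True only if the URL's host is the expected domain or a
    true subdomain of it. The dot-boundary check catches lookalike
    hosts such as 'paypal.com.secure-check.example', which a naive
    substring test would wrongly accept."""
    host = (urlparse(url).hostname or "").lower()
    expected = expected.lower()
    return host == expected or host.endswith("." + expected)

# A link whose visible text says "paypal.com" but whose target is a lookalike:
print(domain_matches("https://paypal.com.secure-check.example/login", "paypal.com"))  # False
print(domain_matches("https://www.paypal.com/signin", "paypal.com"))                  # True
```

The point of the dot-boundary comparison is that scammers routinely embed a trusted brand name at the start of a longer hostname; only the rightmost labels of the host determine who actually controls the page.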

3. Deepfake Video

What it does: Creates realistic video of anyone saying or doing anything.

Current scam applications:

  • Fake video calls impersonating executives (for BEC attacks)
  • Celebrity endorsement videos for investment scams
  • Fake news anchors promoting fraudulent products
  • Video "proof" in romance scams

The defense: Verify through independent channels. If your CEO requests a wire transfer over video call, call their known phone number to confirm.

How AI Changes Each Scam Category

Scam Type         FTC Complaints   Pre-AI Detection            Post-AI Detection
Impersonation     154,716          Voice sounds wrong          Voice sounds real (cloned)
Debt reduction    345,670          Robocall sounds automated   AI voices sound human
Medical           113,158          Generic scripts             Personalized health info
Tech support      6,857            Broken English agents       Fluent AI-assisted agents
Phishing (email)  N/A              Grammar errors              Perfect grammar
Phishing (text)   N/A              Spelling mistakes           Flawless messages

What Still Works for Detection

Despite AI advancements, these checks remain reliable:

Technical Verification

  • Email authentication (SPF, DKIM, DMARC) - AI can write the message, but it cannot forge passing authentication results for a domain the sender does not control
  • URL verification - the actual domain must be checked, regardless of how good the content looks
  • Phone number lookup - check numbers against FTC/FCC databases on ScamVerify
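The first check above can be scripted: receiving mail servers record SPF, DKIM, and DMARC verdicts in an Authentication-Results header. Here is a minimal Python sketch that pulls those verdicts out of a raw message; the sample message and the `auth_results` helper are synthetic examples, not a ScamVerify API:

```python
import email
import re

# Synthetic raw message with the header a receiving server would add.
RAW = """\
Authentication-Results: mx.example.net; spf=pass smtp.mailfrom=billing@example.com; dkim=pass header.d=example.com; dmarc=pass header.from=example.com
From: "Example Billing" <billing@example.com>
Subject: Your invoice

(body)
"""

def auth_results(raw_message: str) -> dict:
    """Extract the spf/dkim/dmarc verdicts from the
    Authentication-Results header, if one is present."""
    msg = email.message_from_string(raw_message)
    header = msg.get("Authentication-Results", "")
    return dict(re.findall(r"\b(spf|dkim|dmarc)=(\w+)", header))

print(auth_results(RAW))  # {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'pass'}
```

Anything other than three "pass" verdicts, or a missing header entirely, is a reason to distrust the message no matter how polished its prose is.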

Behavioral Red Flags

  • Urgency - AI makes content more convincing but scams still rely on time pressure
  • Unusual requests - a CEO asking for gift cards is suspicious regardless of how real the voice sounds
  • Secrecy - "do not tell anyone about this" is a scam tactic that AI does not change

Systemic Protections

  • Call screening - blocks unknown numbers before AI voices matter
  • Multi-factor authentication - stolen credentials alone are not enough
  • Verification callbacks - calling a known number to confirm a request
  • ScamVerify - checks numbers, URLs, texts, and emails against threat databases at scamverify.ai

FAQ

Can AI clone my voice from a phone call?

Technically yes, if the call is recorded. However, most voice cloning requires clear audio samples. Short phone call recordings with background noise produce lower-quality clones. The bigger risk is public audio: social media videos, podcast episodes, and voicemail greetings that provide clear, noise-free samples.

Are AI-generated scam calls illegal?

The FCC ruled in February 2024 that AI-generated voices in robocalls are illegal under the Telephone Consumer Protection Act (TCPA). However, enforcement is challenging because most scam operations are based overseas. The ruling provides a legal basis for action but does not prevent the calls.

How can I tell if a voice on the phone is AI-generated?

In 2026, it is extremely difficult for untrained humans to detect high-quality AI voices in real time. Instead of trying to detect the voice itself, verify the identity through other means: ask a question only the real person would know, use a family code word, or call back on a known number.

