CyberJustice Law Group

AI and Impersonation Scams

Learn how artificial intelligence is accelerating the rise of impersonation scams and how you can still spot them.

In early 2024, a finance employee at engineering firm Arup joined a video call that appeared to include the company's CFO and several colleagues. The people on the screen looked real. The voices sounded normal. The request felt routine.

But by the end of the call, the employee had sent 15 transfers totaling about $25.6 million to scammers using deepfake technology, according to reporting summarized by the World Economic Forum.

This fraud was not powered by one fake email, one stolen headshot, or one badly written script. It was carried out with a stack of tools that hid the scammers' identities behind layers of emerging technology. The common thread that makes it all possible? Artificial intelligence.

AI By the Numbers

AI has helped industrialize the impersonation scam. It lowers the skill floor, cuts the cost of sophisticated tech stacks, and shrinks the head count of complex scam operations that once required dozens of workers.

Recent data supports this shift. Keepnet Labs reported a 1,740% jump in deepfake fraud in North America between 2022 and 2023, roughly an eighteen-fold increase in a single year. Vectra AI reported that AI scams surged 1,210% in 2025 while traditional fraud grew 195%. Veriff reported that deepfake attacks now drive one in every 20 identity-verification failures.

For victims, the practical consequence is that spotting scams is much more difficult. Scammers can easily clone the voice of a loved one, fake live video interactions, and translate messages flawlessly.

AI is creating a more dangerous landscape for potential victims. It's more important than ever to understand what has changed and how to respond if you've been taken advantage of.

For a broader breakdown of impersonation scams, see our Ultimate Guide to Impersonation Scams.

How AI Changed Impersonation Scams

AI expands the impersonation scammer's toolbox. Different tools solve different problems for scammers, but they all point in the same direction: more believable fraud delivered at greater speed and scale.

The AI Scam Toolkit

Four capabilities let scammers look believable, sound believable, and stay consistent long enough to move real money:

  • 🎙️ Voice cloning (sound like anyone). Solves the problem of live phone verification: a few seconds of public audio is enough to mimic a relative, executive, or support rep.
  • 📹 Deepfake video (look like anyone). Solves the "get on camera" test: real-time deepfakes can carry a live conversation, not just play a pre-recorded clip.
  • 🪪 Synthetic identities (become anyone). Solves the background-check problem: AI can generate photos, fake IDs, social profiles, and supporting documents from scratch.
  • 💬 LLM conversation (stay consistent forever). Solves the fatigue problem: AI-written messages stay polished, on-tone, and consistent across days or weeks of sustained contact.

Sources: McAfee, Veriff, Keepnet Labs, Norton, Sardine AI, Sift

Voice cloning

Voice cloning is one of the most accessible AI scam tools because it requires minimal source material. McAfee reported that only three seconds of audio may be enough to produce a strong vocal match. Public clips on social media, podcasts, YouTube, or voicemail greetings can easily provide that much raw material.

This is especially potent in family-emergency scams, which typically work like this:

  • A scammer calls the victim using the cloned voice of a child, spouse, or grandchild, claiming to need immediate medical or financial assistance after a car crash or similar accident.
  • The victim does not take the time to verify the caller's identity in part because the voice clone is so convincing, but also because the fake emergency introduces a synthetic sense of urgency.
  • Even if it seems strange to the victim that their loved one is asking them to deposit crypto or cash to an unknown account, these red flags are overridden by feelings of panic and empathy.

These scams don't have to survive close scrutiny for long, only until the victim's adrenaline wears off. This same tactic is used to impersonate banks, customer support, executives, and business colleagues.

As with any other AI-enabled scam tactic, voice cloning is on the rise. DeepStrike reported that voice phishing (vishing), increasingly powered by voice cloning, grew 442% from the first half of 2024 to the second half.

Deepfake video

Until deepfake video technology emerged, a refusal to chat live on video was a glaring red flag. Now, fraudsters can use AI to interact with victims live, in high-fidelity renderings of a favorite celebrity or any other stolen identity.

It may be difficult to imagine how these fakes fool everyday people, but in reality it's a common occurrence. Keepnet Labs put human detection accuracy for high-quality video deepfakes at only 24.5%.

Scammers' success stories abound:

  • Norton documented repeated deepfake videos of Elon Musk promoting fake crypto giveaways.
  • Cybernews reported a case in which a British widow lost £500,000 in a romance scam involving a deepfaked Jason Momoa identity.

AI has turned one of the most reliable ways to spot a scammer on its head. It's no longer enough to say "Let's video chat" and drop the conversation if they refuse. Even if the answer is yes, proceed cautiously with anyone you've met online.

Synthetic identities and fake documents

Not every impersonation scam now depends on stealing a real person's identity. Sometimes the scammer manufactures one from scratch. A synthetic persona can include AI-generated photos, fake IDs, supporting documents, employment claims, active-looking social profiles, and other details built to survive casual verification.

The FBI has warned that criminals are using generative AI to create fraudulent identification documents, fake credentials, and AI-generated images to support financial fraud. Sardine AI described services capable of producing realistic digital IDs for about $15, and Veriff reported that modern tools can generate fake IDs directly from a text prompt.

That means surface-level checks are weaker than they used to be. A victim might search a name, find a profile, see supporting paperwork, and conclude that the person must be real. But the profile, the paperwork, and the supporting details may all have been generated as part of the same fraud package.

LLM-written conversation

Large language models solve yet another problem for scammers: consistency. Historically, many scams unraveled, or never got off the ground, because the scammer couldn't write fluent English, forgot details of earlier conversations, or failed to keep a convincing voice over time. LLMs minimize these weaknesses.

Norton has described chatbot-driven scams that can hold natural conversations around the clock. Sift reported that more than 82% of phishing emails now show signs of AI assistance. Vectra AI cited research finding that 40% of BEC emails are primarily AI-generated.

Impersonation scams often depend on sustained narrative control. A fake support agent has to sound technical. A fake executive has to sound authoritative. A fake romantic partner has to sound emotionally available. AI helps scammers keep those voices steady for days or weeks at a time.

Put together, these tools create a more dangerous scam environment. The modern impersonation scam is a system rather than a single tool: cloned voice, fake video, believable writing, and supporting records all working together.

How To Protect Yourself In The AI Era

In the AI era, verification has to move outside the interaction the scammer controls.

That means the best anti-scam habits now look a little different than they did previously:

  • Verify out of band. If your bank, exchange, employer, relative, or a government contact reaches out, end the interaction and contact the real institution or person using a phone number, app, or website you found independently. Legitimate institutions will not punish you for hanging up and calling back.
  • Use family code words. A short private phrase that a voice clone would not know can stop a cloned-voice emergency scam far more reliably than trying to judge whether the audio sounds real.
  • Treat urgency as evidence. The more pressure the person applies to keep you in the conversation, the more likely it is that independent verification would break the scam.
  • Distrust proof that stays inside the same channel. Caller ID, email signatures, video calls, profile photos, and attached IDs may all be part of the fraud package. Proof only counts if it comes from a source you found independently.
  • Slow down money movement. Dual approval, a short waiting period, or a forced callback can prevent catastrophic losses. This is especially critical for crypto transfers, wire payments, and seed-phrase requests. (A short sketch of this idea follows this list.)
  • Assume polished phishing exists. Correct grammar, personal details, and a convincing tone are no longer signs that a message is safe.
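
For households or businesses that want to make the "slow down money movement" habit harder to bypass in a moment of panic, the waiting-period-plus-dual-approval idea can be written down as an explicit rule. Below is a minimal Python sketch of that control; the threshold, hold period, and function names are hypothetical illustrations, not a real banking or payment API.

```python
from datetime import datetime, timedelta

# Illustrative sketch of a "slow down money movement" control:
# a large transfer is released only after two different people
# approve it AND a cooling-off period has passed. All names and
# numbers here are hypothetical, not a real payment system's API.

HOLD_PERIOD = timedelta(hours=24)   # forced waiting period
LARGE_TRANSFER_USD = 10_000         # threshold for extra checks

def can_release(amount_usd: float, requested_at: datetime, approvers: list[str]) -> bool:
    """Return True only when the transfer passes both controls."""
    if amount_usd < LARGE_TRANSFER_USD:
        return True                 # small transfers flow normally
    dual_approved = len(set(approvers)) >= 2
    cooled_off = datetime.now() - requested_at >= HOLD_PERIOD
    return dual_approved and cooled_off

# A $50,000 wire requested an hour ago with one approver stays held,
# no matter how urgent the caller claims the situation is.
print(can_release(50_000, datetime.now() - timedelta(hours=1), ["alex"]))          # False
print(can_release(50_000, datetime.now() - timedelta(hours=25), ["alex", "sam"]))  # True
```

The point is not the code itself but the design behind it: no single person, and no single moment of panic, can move a large sum alone.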

If you remember only one rule, remember this one: do not verify identity using information supplied by the person who reached out to you.

What To Do If You Already Sent Money

If you have already been targeted or have already sent money or crypto, the first thing to do is take a deep breath. You are not alone in this, and you have resources. Be warned, though: it is extremely likely someone will try to scam you again.

If anyone reaches out to you directly on social media or WhatsApp and guarantees to recover your funds for an upfront fee, they are a scammer. Don't respond. Do your homework. If you do want to pursue recovery, find a licensed, U.S.-based lawyer.

Once you've got your wits about you, start with these steps:

  1. Stop the conversation and stop sending money.
  2. Preserve screenshots, usernames, wallet addresses, transaction hashes, call logs, emails, and profile links.
  3. Lock down passwords, two-factor authentication, and any accounts the scammer may have touched.
  4. Report the fraud to the relevant bank, exchange, platform, payment provider, the FTC, and the FBI's IC3.
  5. Evaluate quickly whether the funds may still be traceable, frozen, or recoverable. An attorney with blockchain expertise can help with this, but they will never guarantee recovery. A real lawyer will always speak with you in depth about your case and thoroughly review the evidence before taking you on as a client.

There are paths to recovery for some victims. Crypto, for instance, leaves a public trail despite its reputation for anonymity. Moving quickly but prudently to preserve evidence before reaching out to an attorney can make a world of difference.
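
To make "crypto leaves a trail" concrete, here is a small Python sketch of how a victim or attorney might keep evidence in a single structured file, including links to a public block explorer where anyone can view a transaction by its hash. The field names and placeholder values are hypothetical; what matters is copying addresses and hashes exactly rather than retyping them.

```python
import json
from datetime import datetime, timezone

# Hypothetical evidence record. Copy wallet addresses and transaction
# hashes exactly from your wallet or exchange history; a single wrong
# character makes a hash impossible to trace.
evidence = {
    "recorded_at": datetime.now(timezone.utc).isoformat(),
    "platform": "WhatsApp",                     # where the contact happened
    "scammer_handle": "@example_support_desk",  # hypothetical handle
    "wallet_addresses": ["0x1234...abcd"],      # placeholder
    "transaction_hashes": ["0xdeadbeef..."],    # placeholder
    "explorer_links": [
        # Public explorers (e.g., Etherscan for Ethereum) display any
        # transaction by hash, which is why the trail never disappears:
        "https://etherscan.io/tx/0xdeadbeef...",
    ],
    "screenshot_files": ["2025-01-15_chat_thread.png"],
}

# Write a timestamped copy you can hand to investigators or counsel.
with open("scam_evidence.json", "w") as f:
    json.dump(evidence, f, indent=2)
```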

How A Crypto Recovery Lawyer Can Help

AI impersonation scams are deeply intertwined with cryptocurrency investment fraud, wire fraud, fake support operations, synthetic identities, and illegal cross-border actors. This landscape is complex and changing at an extremely rapid rate.

This means the legal avenues for asset recovery are also evolving. Crypto recovery attorneys are still testing who can be held liable (exchanges, organized crime rings, or individuals) and how best to achieve positive outcomes for their clients.

An attorney who can be helpful to you as a victim will:

  • Have extensive experience tracing funds on-chain. They will be credentialed and familiar not only with the law, but also with blockchain technology itself.
  • Be honest with you about your chances. Not every case has a reasonable chance of recovery. Don't listen to anyone who makes promises that sound too good to be true.
  • Explain the process. As a client, you should understand where your case stands and what's to come. These cases can take years to play out, and you deserve an open line of communication.
  • Be innovative. There are only a few firms in the United States actively and effectively pursuing recovery for victims of crypto fraud. Be sure you hire one that's on the cutting edge.

CyberJustice Law Group is a U.S.-based, licensed law firm with verifiable credentials. If you lost money in an impersonation scam, you can contact us for a free video consultation.

We can evaluate what evidence you have, what kind of scam you are dealing with, and whether meaningful recovery pathways may still exist.

AI and Impersonation Scams FAQ

How is AI used in impersonation scams?
Scammers use AI for voice cloning, deepfake video, and AI-generated text to impersonate trusted people or institutions. Voice clones can be made from seconds of audio; deepfakes can mimic a CEO or family member on a video call; and AI-polished phishing emails and support chats eliminate the grammar and spelling tells that used to expose fakes. The result is that impersonation scams are harder to spot and more profitable for criminals.

Can AI clone someone's voice from a short clip?
Yes. Modern voice-cloning tools can produce a convincing clone from as little as a few seconds of audio taken from social media, podcasts, or videos. That is why family-emergency and fake-support calls sound more realistic than they used to. Verifying through a separate channel (e.g., calling back on a known number or using a pre-agreed code word) is more important than ever.

How can I tell if a video call is a deepfake?
Deepfake video is improving quickly, so a single video call is no longer reliable proof of identity. Red flags include odd lighting or motion, stiff expressions, or the person avoiding specific questions. The best defense is to verify through multiple channels: call the real company or person on a known number, use a pre-agreed phrase, and be suspicious of anyone who pressures you to act before you can verify.

What should I do if I lost money to an AI-powered impersonation scam?
Stop all contact with the scammer, secure your accounts, and preserve every piece of evidence (screenshots, call logs, wallet addresses, transaction hashes). Report to the FTC, FBI IC3, and any exchange or bank involved. Then consider speaking with a crypto fraud attorney; blockchain tracing and legal action can sometimes lead to recovery. Be wary of anyone who contacts you afterward offering to recover the funds for an upfront fee; that is often a recovery scam.