This report examines deepfake-enabled crime trends for 2025, analyzing verified statistics from law enforcement agencies, financial institutions, and cybersecurity research. 

All data points in this report are sourced from authoritative reports, including the FBI's Internet Crime Complaint Center, Deloitte's Center for Financial Services, peer-reviewed academic research, and leading cybersecurity firms. The analysis reveals rapid acceleration in both the volume and sophistication of deepfake attacks, with global deepfake fraud attempts increasing by 3,000% in 2023 alone.

Deepfake fraud growth by region: 2025

This table highlights the rapid growth of deepfake-enabled fraud in 2025, showing regional increases, financial losses, and the primary sectors targeted.

| Region | Fraud Increase (2022-2023) | Q1 2025 Losses | Primary Target Sector |
|---|---|---|---|
| North America | 1,740% | > $200 million | Financial Services |
| Asia-Pacific | 1,530% | Data not available | Cryptocurrency |
| Europe | Data not available | Data not available | Banking & Fintech |
| Global (All Regions) | 1,000%+ (10x increase) | Data not available | Cross-sector |

Key Findings:

  • North America experienced a 1,740% surge in deepfake fraud between 2022 and 2023, according to identity verification data from Sumsub.

  • Financial losses exceeded $200 million in Q1 2025 alone in North America, representing a significant acceleration of fraud-related damage.

  • The Asia-Pacific region saw comparable growth at 1,530%, with cryptocurrency platforms being the primary target due to digital-native operations and high-value transactions.

Financial impact of deepfake attacks: 2025

This table summarizes the 2025 financial impact of deepfake attacks, highlighting average losses, notable cases, and the key detection challenges for each attack type.

| Attack Type | Average Loss per Incident | Notable Case | Primary Detection Challenge |
|---|---|---|---|
| CEO Fraud / BEC | $500,000 | $25M (Arup, 2024) | Video conferencing deepfakes |
| Voice Cloning Scams | Variable (consumer) | 77% of victims lost money | 3-second audio samples |
| Identity Verification Bypass | Variable (account-based) | 704% increase in 2023 | Liveness detection spoofing |
| Investment / Crypto Scams | $6.5B+ (total, 2024) | Celebrity impersonations | Social media distribution |

Key Findings:

  • The average cost of a deepfake incident for businesses in 2024 was nearly $500,000, with large enterprises experiencing losses up to $680,000 per incident.

  • The Arup case represents the most significant single-incident loss to date, where fraudsters used deepfaked video conferencing to impersonate executives and authorize a $25 million wire transfer in February 2024.

  • Investment fraud, particularly cryptocurrency scams, resulted in over $6.5 billion in losses in 2024, according to the FBI's Internet Crime Complaint Center.

Deepfake vulnerability by industry sector: 2025

This table outlines deepfake vulnerability across industry sectors in 2025, showing the share of fraud cases, main attack vectors, and growth trends.

| Industry Sector | % of Deepfake Fraud Cases | Primary Attack Vector | Year-over-Year Growth |
|---|---|---|---|
| Cryptocurrency | 88% | KYC/IDV bypass | Not specified |
| Fintech | 8% | Identity verification fraud | 700% (2023) |
| Traditional Banking | Data not specified | BEC / CEO fraud | Projected 32% CAGR to 2027 |
| Corporate Enterprise | 10%+ (experienced attacks) | Executive impersonation | 400+ companies targeted daily |

Key Findings:

  • The cryptocurrency sector accounts for 88% of all detected deepfake fraud cases, driven by digital-native operations and irreversible transactions.

  • Fintech experienced a 700% increase in deepfake incidents in 2023, with attackers using sophisticated face-swap technology to bypass liveness detection checks.

  • CEO fraud now targets at least 400 companies per day globally, with deepfake audio and video making business email compromise attacks significantly more convincing.

Detection accuracy and defense gaps: 2025

This table compares deepfake detection methods in 2025, highlighting accuracy rates, key limitations, and the gaps that leave defenses vulnerable.

| Detection Method | Accuracy Rate | Key Limitation | Status |
|---|---|---|---|
| Human Detection (Video) | 24.5% | Cannot distinguish high-quality fakes | Unreliable |
| Human Detection (Images) | 62% | Better than video, but still inadequate | Marginal |
| AI Detection (Lab) | Near-perfect | Does not transfer to real-world conditions | Overestimated |
| AI Detection (Real-World) | 45-65% | 45-50% accuracy drop vs. lab conditions | Inadequate |
| Audio Deepfake Detection | Variable (low) | Only 1 of 4 free tools detected the fake Biden robocall | Critical Gap |

Key Findings:

  • Human detection accuracy for high-quality deepfake videos is only 24.5%, meaning people are wrong three out of four times when evaluating sophisticated synthetic media.

  • A 2025 study by iProov found that only 0.1% of participants could correctly identify all fake and real media shown to them, effectively eliminating human judgment as a reliable defense mechanism.

  • AI detection tools experience a 45-50% accuracy drop when moved from controlled lab environments to real-world deepfakes, creating a dangerous vulnerability gap.
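The cost of these accuracy gaps compounds, or shrinks, depending on how checks are layered. As a back-of-envelope illustration only (the independence of the two checks is a simplifying assumption, not a finding from this report), combining the 24.5% human accuracy with a mid-range 60% real-world AI detector shows why neither layer alone is acceptable:

```python
# Back-of-envelope: probability that a deepfake slips past layered checks,
# assuming each layer catches fakes independently (a simplifying assumption).

def miss_rate(*catch_rates: float) -> float:
    """Probability that every layer misses the fake."""
    p = 1.0
    for r in catch_rates:
        p *= (1.0 - r)
    return p

human_video = 0.245   # human detection accuracy on high-quality fake video
ai_real_world = 0.60  # midpoint of the 45-65% real-world AI detector range

print(f"Human alone misses:   {miss_rate(human_video):.1%}")               # 75.5%
print(f"AI alone misses:      {miss_rate(ai_real_world):.1%}")             # 40.0%
print(f"Human + AI both miss: {miss_rate(human_video, ai_real_world):.1%}")  # 30.2%
```

Even under this optimistic independence assumption, roughly three in ten sophisticated fakes pass both layers, which is why the defense strategies later in this report emphasize procedural verification over detection.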

Enterprise preparedness and awareness: 2025

This table highlights enterprise preparedness for deepfake threats in 2025, showing gaps in protocols, training, and executive awareness alongside associated risk levels.

| Preparedness Metric | Percentage | Risk Level | Implication |
|---|---|---|---|
| Companies with no deepfake protocols | 80% | Critical | No response plan exists |
| Employees with no training | 50%+ | High | Vulnerable to social engineering |
| Executives unfamiliar with deepfakes | 25% | High | Leadership blind spot |
| Executives who deny increased risk | 31% | Critical | Willful ignorance of the threat |
| Companies that experienced attacks | 10%+ | Moderate | Growing target surface |

Key Findings:

  • 80% of companies have no established protocols for handling deepfake attacks, leaving them systemically vulnerable to fraud schemes.

  • More than half of employees have received no training on recognizing or responding to deepfake threats, despite these attacks targeting individuals as the weakest link.

  • One in four executives has little to no familiarity with deepfake technology, and nearly one-third deny that deepfakes increase their company's fraud risk.

Deepfake creation costs and accessibility: 2025

This table outlines the cost, time, and skill required to create different types of deepfakes in 2025, highlighting how accessible the technology has become.

| Deepfake Type | Creation Cost | Time Required | Technical Skill Level |
|---|---|---|---|
| Basic Audio Clone | $1-$20 | 3 seconds of source audio | Minimal (consumer apps) |
| Robocall (Political) | $1 | < 20 minutes | Minimal |
| Basic Video Deepfake | $300-$20,000 | < 10 minutes (with apps) | Low to moderate |
| High-Quality Video | $20,000+ | Hours to days | Advanced (professionals) |
| Dark Web Services | $20-thousands | Variable (outsourced) | None (purchase ready-made) |

Key Findings:

  • The barrier to entry for deepfake creation has collapsed, with basic audio cloning requiring as little as $1 and three seconds of source audio to create an 85% voice match.

  • The Biden robocall that disrupted the 2024 New Hampshire primary cost just $1 to create and took less than 20 minutes.

  • An entire cottage industry exists on the dark web selling deepfake creation software and services ranging from $20 to thousands of dollars.

Scale of the threat

Deepfake content is exploding, rising from about 500,000 files shared in 2023 to a projected 8 million in 2025. Most people (70%) don’t feel confident distinguishing real voices from cloned ones, yet 40% would act on a voicemail from a loved one, and scams using voice clones have already caused losses for 77% of confirmed victims. While AI detection tools are growing quickly, the threat is growing far faster, with deepfake activity expanding at 900%–1,740% annually in key regions.

Criminal applications

  • CEO Fraud: Executive impersonation can cause massive financial losses, from a €220,000 voice-cloned CEO transfer in 2019 to the $25 million Arup video-conference scam in 2024.

  • Voice Cloning: AI can replicate voices from just three seconds of audio. One in four adults has encountered an AI voice scam, and one in 10 has been directly targeted.

  • Identity Verification Bypass: Fraudsters use face-swap deepfakes and virtual cameras to defeat liveness detection, with attempts rising 704% in 2023—cryptocurrency accounts for 88% of cases.

  • Investment Scams: Deepfaked celebrities promote crypto and investment fraud, contributing to $6.5 billion in cryptocurrency fraud losses reported by the FBI in 2024.

  • Non-Consensual Intimate Imagery: 96–98% of deepfake content is non-consensual intimate imagery, with nearly all victims female. The U.S. TAKE IT DOWN Act (2025) criminalizes its creation and distribution.

Defense strategies

Traditional defenses fail against deepfakes, so organizations should focus on robust verification procedures rather than detection:

  • Callback verification: Confirm unusual financial requests via a pre-registered phone number.

  • Two-person approval: Require two authorized individuals to approve high-value transactions through separate channels.

  • Code words: Use secret phrases for urgent communications.

The key is to design procedures that remain effective even against perfect deepfakes; the Arup attack could have been prevented with such protocols. Move from awareness to preparedness: train employees to follow verification protocols consistently, since human detection of deepfake video alone is only 24.5% accurate.
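The verification rules above can be expressed as a simple policy check. The sketch below is a hypothetical illustration (the class, threshold, and channel names are assumptions, not a product API); it encodes the two-person rule and callback verification so that no single channel, including a deepfaked video call, can authorize a high-value transfer on its own:

```python
# Minimal sketch of out-of-band payment verification (hypothetical names).
# Policy: high-value transfers require two distinct approvers over two
# distinct channels, plus a callback to a pre-registered phone number.

from dataclasses import dataclass, field

HIGH_VALUE_THRESHOLD = 50_000  # example threshold; set by organizational policy

@dataclass
class TransferRequest:
    amount: float
    approvals: dict = field(default_factory=dict)  # approver -> channel used
    callback_confirmed: bool = False               # callback to number on file

    def approve(self, approver: str, channel: str) -> None:
        self.approvals[approver] = channel

    def is_authorized(self) -> bool:
        if self.amount < HIGH_VALUE_THRESHOLD:
            return len(self.approvals) >= 1
        # Two distinct approvers, two distinct channels, and the callback.
        distinct_channels = set(self.approvals.values())
        return (len(self.approvals) >= 2
                and len(distinct_channels) >= 2
                and self.callback_confirmed)

req = TransferRequest(amount=25_000_000)      # an Arup-scale request
req.approve("cfo", channel="video_call")      # the (possibly deepfaked) channel
print(req.is_authorized())                    # False: one approver, no callback
req.approve("controller", channel="registered_phone")
req.callback_confirmed = True                 # callback to pre-registered number
print(req.is_authorized())                    # True: policy satisfied
```

The design point is that authorization depends on facts a deepfake cannot forge (a second human on a separately registered channel), not on anyone judging whether the video or voice looks real.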

Learn More

You can learn more about Ceartas here, and contact us through our integrated chat service if you have any questions.
