The data shows that fraud is becoming faster, more organized, and more commercially driven – a trend expected to continue through 2026. As generative AI (GenAI) and shared methodologies become more accessible, fraud attempts will likely increase in volume and sophistication, forcing organizations to strengthen defenses across every point of the identity lifecycle. The report highlights how these threats are evolving, which industries are most affected, and where fraud is heading next.
Trust and Identity
Without trust, identity cannot be verified; without verified identity, trust cannot exist. Today, that critical balance has been upended, as fraudsters around the world increasingly exploit advanced AI to manipulate and deceive. The rapid adoption of AI across platforms is enabling offenders to grow more cunning in their tactics and to operate at a scale never seen before.
The playing field is evolving at breakneck speed, targeting people, their identities, and the systems designed to protect them.
The report provides a detailed analysis of the primary areas of vulnerability facing individuals and businesses today, with practical examples of the most common fraud vectors and effective strategies to prevent them.
Modus Operandi of Fraudsters
1. They target identity elements.
Fraudsters forge or steal identity documents, impersonate biometrics with deepfakes, or build entirely synthetic identities to bypass verification.
2. They target prevention systems.
Injection attacks, device emulation, and automated bots aim to bypass verification flows and exploit the technology that’s meant to stop them.
3. They target people.
Psychological manipulation – from phishing and impersonation to coercion and romance scams – convinces victims to use their own genuine identity, hand over sensitive data, or transfer funds.
Key Fraud Trends
Social Engineering
Due to the nature of this type of fraud, it’s tough to quantify. Still, coercion, phishing, and impersonation scams are harder to stop because victims are convinced – or forced – to use their own genuine identity credentials.
Deepfake
Deepfakes are now part of everyday life and are linked to 1 in 5 instances of biometric fraud.
Crypto
67% of attacks occur at onboarding, often driven by sign-up bonuses.
Payments
82% of fraud attempts target the authentication process.
Digital Banks
55% of fraud happens after onboarding.
GenAI as an Enabler of Identity Fraud
In 2025, physical counterfeits accounted for the largest share of fraud attempts (47%), while digital forgeries were also prevalent, accounting for 35% of attempts. The rise of digital methods is fueled by the accessibility and scalability of modern editing tools, which make it cheaper and faster for fraudsters to manipulate images, replicate templates, and mass-produce convincing forgeries.
GenAI has amplified this trend – enabling fraudsters to create hyper-realistic replicas of identity documents. What once required specialized software and design skills can now be achieved with an open-source model and a few prompts. While these AI-generated fakes can appear authentic to the human eye, they still leave detectable patterns that advanced machine-learning models can identify and block.
Deepfakes Account for 1 in 5 Biometric Fraud Attempts
Deepfake Methods
• Synthetic identities: AI-generated faces that don’t correspond to real people.
• Face swaps: Replacing one person’s face with another in a recorded or live video.
• Animated selfies: Taking a static photo and using AI to add movement.
Presentation Attacks
Fraudsters attempt to fool a biometric system using a fake object, for example, by presenting a printed photo, a mask, or a video of a screen.
Automation and Device Emulation Scale With Fraud-as-a-Service
Device emulation and automation have become key methods for scaling fraud, allowing attackers to mimic real user activity and overwhelm systems at volume. Device emulation involves imitating a legitimate device’s characteristics, such as its operating system or hardware signature, to make fraudulent activity appear authentic.
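One common defensive response to device emulation is consistency checking: an emulated device often reports fingerprint attributes that contradict one another. The sketch below illustrates the idea with a few rule-based checks; the field names, emulator model strings, and rules are illustrative assumptions, not details from the report.

```python
# Illustrative sketch only: rule-based checks for device-fingerprint
# contradictions that often betray emulators. Field names and rules
# are hypothetical examples, not from the Entrust report.

EMULATOR_MODELS = {"sdk_gphone", "generic_x86", "goldfish", "vbox86"}

def emulation_signals(fp: dict) -> list[str]:
    """Return a list of reasons this fingerprint looks emulated."""
    reasons = []
    model = fp.get("hardware_model", "").lower()
    if any(m in model for m in EMULATOR_MODELS):
        reasons.append("known emulator hardware model")
    # The OS claimed in the user agent should match the reported platform
    ua, platform = fp.get("user_agent", ""), fp.get("platform", "")
    if "iPhone" in ua and platform == "android":
        reasons.append("user agent / platform mismatch")
    # Real handsets vary in battery level; many emulators pin it at 100
    if fp.get("battery_level") == 100 and fp.get("charging") is False:
        reasons.append("static battery reading")
    return reasons

signals = emulation_signals({
    "hardware_model": "generic_x86_64",
    "user_agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 ...)",
    "platform": "android",
    "battery_level": 100,
    "charging": False,
})
print(signals)
```

In production such rules are only one signal among many, combined with behavioral and network telemetry before any blocking decision.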
The Professionalization of Fraud
Fraud today is no longer the work of isolated criminals – it’s a global enterprise. Fraud rings are organized groups that plan and execute complex fraud operations, often spanning industries, regions, and platforms. They have:
• Defined roles: Rings divide labor among recruiters, organizers, enforcers, and technical specialists.
• Sophisticated operations: Rings are capable of coordinating large, multi-step attacks.
• Scale and specialization: Rings range from small cells to hundreds of members, often focusing on a single fraud type such as credit card abuse or identity theft.
Organized fraud rings operate across continents and time zones, ensuring their attacks never stop. Entrust data shows that fraud attempts peak between 2 and 4 am UTC, when defenses in many regions are offline – illustrating how criminals coordinate globally to exploit gaps in coverage.
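The 2–4 am UTC peak suggests a simple countermeasure: weight risk scores by time of day so that activity in low-coverage windows receives extra scrutiny. A minimal sketch, assuming a hypothetical additive risk-weight scheme (the 0.3 weight is illustrative, not a report figure):

```python
# Minimal sketch: add a risk weight for events in the low-coverage
# window the report highlights (02:00-04:00 UTC). The weight value
# is an illustrative assumption, not from the report.
from datetime import datetime, timezone

def off_hours_risk(ts: datetime) -> float:
    """Extra risk weight for events in the 02:00-04:00 UTC window."""
    hour = ts.astimezone(timezone.utc).hour
    return 0.3 if 2 <= hour < 4 else 0.0

event = datetime(2026, 1, 15, 3, 12, tzinfo=timezone.utc)
print(off_hours_risk(event))  # 0.3
```

Normalizing every timestamp to UTC before scoring matters here, since attackers deliberately exploit gaps between regional business hours.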
Attackers can now purchase ready-made kits, credential dumps, and AI-powered deepfake tools directly through encrypted messaging channels and dark web forums. These platforms have made professional-grade fraud available to anyone with minimal technical skill, fueling a surge in volume and sophistication.
Fraud Prevention
Lifecycle Protection
• Onboarding: Document and biometric verification confirm users are who they claim to be before accounts are opened.
• Authentication: Multi-factor and biometric checks protect logins and re-verifications against account takeovers.
• Transactions: Device intelligence, anomaly detection, and behavioral signals safeguard high-risk activities like payments, transfers, or data access.
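The anomaly-detection idea in the Transactions layer can be sketched with a basic statistical check: flag a payment whose amount deviates sharply from the user's history. This is a toy illustration, not the report's method; the z-score threshold of 3 is an assumed value.

```python
# Illustrative anomaly check for the "Transactions" layer: flag an
# amount far outside a user's historical pattern. The z-score
# threshold is an assumption for this sketch, not a report figure.
import statistics

def is_anomalous(history: list[float], amount: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag `amount` if it sits more than z_threshold std devs above the mean."""
    if len(history) < 2:
        return False  # not enough data to judge
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return (amount - mean) / stdev > z_threshold

past = [42.0, 55.0, 38.0, 61.0, 47.0]
print(is_anomalous(past, 50.0))   # a typical payment for this user
print(is_anomalous(past, 900.0))  # far outside the user's pattern
```

Real systems layer many such signals (device, velocity, geolocation, behavior) into a composite risk score rather than relying on any single rule.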
According to the 2025 Docusign and Entrust Future of Global Identity Verification report, organizations that implement robust identity verification save an average of $8 million per year in fraud-related costs.
The Future of Fraud
The future of fraud prevention lies in identity-centric, AI-driven defense. Organizations that protect every layer of identity – people, documents, biometrics, and systems – will be best equipped to adapt as fraudsters adopt new tools and tactics.
Key Stats
- Crypto platforms are a prime target for scalable, AI-based attacks. They account for 60% of all deepfake fraud, while nearly 50% of document fraud attempts against crypto companies are digital forgeries.
- Fraudsters follow opportunity. Wherever funds move quickly or verification is minimal, they will find ways to exploit it.
- On average, industries that offer long-term financial gain are twice as likely to experience account takeover (ATO) fraud.
Source: Entrust’s 2026 Identity Fraud Report


