Online Fraud 2025-2026: How Cybercriminals Are Evolving with AI


by Editor CLD

Infographic: the shift in online fraud, 2026

If 2025 was marked by large-scale data breaches and refined traditional scams, then 2026 is the year cybercriminals truly enter the era of AI. From simple impersonation via text messages and emails, we now face "virtual characters" that can talk, appear in videos, and even guide victims through complex scams across multiple platforms.

It's clear that the fight against online fraud is shifting from "detecting fake technology" to "identifying unusual behavior and processes." Let's walk through the shifts in online fraud trends across 2025-2026.

The online scam landscape in 2025

Data leaks: A goldmine for cybercriminals.

2025 saw a series of serious data breaches in Vietnam and Southeast Asia. Personal information from banks, e-commerce platforms, delivery apps, and even government agencies was stolen on an unprecedented scale.

Why is this dangerous?

Leaked data isn't just lists of names and phone numbers. It includes transaction history, consumer habits, family relationships, and even biometric data. Criminals use this information to:

  • Build a detailed profile of each potential victim.
  • Craft personalized scam scenarios based on the victim's real circumstances.
  • Increase the success rate, because victims believe "they know me too well."

A typical example: criminals know you've just placed an order on an e-commerce platform, so they call impersonating a delivery driver, quote the exact order details, and ask you to "verify" them via a fake link.

Old but effective attack techniques

In 2025, common scam techniques still revolved around:

  • Ransomware and malware:
      • Attacks targeting small and medium-sized businesses that lack security systems.
      • Encrypting data and demanding ransom in cryptocurrency.
      • Exploiting remote employees with poorly secured personal devices.
  • Phishing/smishing via email and SMS:
      • Falsified notices from banks, tax authorities, and power companies.
      • A manufactured sense of urgency: "Your account will be locked in 24 hours."
      • Links to fake websites that are almost indistinguishable from the real ones.
  • Investment fraud and online Ponzi schemes:
      • Promises of high returns with low risk.
      • Images of celebrities or fake "financial experts."
      • Closed community groups that generate a bandwagon effect.
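Because fake sites rely on domains that look almost identical to the real ones, a quick programmatic check can catch the most common tricks. Below is a minimal sketch; the trusted-domain list and homoglyph map are illustrative assumptions for this example (a real deployment would use a maintained allow-list and a full confusables table), not something from the article:

```python
from urllib.parse import urlparse

# Illustrative examples only -- not an exhaustive allow-list or confusables map.
TRUSTED_DOMAINS = {"vietcombank.com.vn", "shopee.vn", "paypal.com"}
HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def suspicious(url: str) -> bool:
    """Return True if the URL's host looks like a disguised copy
    of a trusted domain rather than the trusted domain itself."""
    host = (urlparse(url).hostname or "").lower()
    if host in TRUSTED_DOMAINS:
        return False                        # exact match: fine
    if host.startswith("xn--") or ".xn--" in host:
        return True                         # punycode (internationalized) host
    normalized = host.translate(HOMOGLYPHS)
    # After undoing digit-for-letter swaps, does it collide with a trusted name?
    return any(normalized == t or normalized.endswith("." + t)
               for t in TRUSTED_DOMAINS)

print(suspicious("https://paypal.com/login"))   # False
print(suspicious("https://paypa1.com/login"))   # True
```

This only catches exact-match collisions after simple character swaps; real brand-protection tooling also measures edit distance and checks newly registered domains, which is why "type the address yourself" remains the safer habit.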

Scam centers: fraud factories at industrial scale

A worrying phenomenon is the rise of "scam centers"—large-scale organized fraud centers that operate like legitimate businesses with:

  • Clearly defined roles: scriptwriter, caller, money launderer.
  • Modern facilities in border areas that are difficult to control.
  • Cross-border scams targeting victims in multiple countries.

These scam centers focus particularly on romance scams, financial investment scams, and impersonating government officials.

2026: When AI becomes a powerful "ally" of criminals

The invasion of Deepfake and Voice Clone

Deepfake and voice cloning technologies have advanced to the point where they can create nearly perfect fake videos and voices at low cost and in a short amount of time. With just a few seconds of original audio or video (easily obtained from social media), criminals can:

  • Create a fake video call from a relative, colleague, or boss.
  • Clone a CEO's voice instructing an accountant to transfer funds urgently.
  • Impersonate bank officials or police officers on video calls to increase credibility.

Here's why this trick is dangerous:

  • People tend to believe what they see with their own eyes.
  • Time pressure pushes victims to skip verification steps.
  • Technology is developing faster than the average user can keep up with.

Multi-channel attack: A scam scenario that surrounds the victim.

2026 marks a shift from single-channel phishing to multi-channel phishing – a coordinated campaign across multiple platforms:

Here's an example of a complete scenario:

  • Phase 1 – SMS outreach: The victim receives a text message from the "bank" about an unusual transaction.
  • Phase 2 – Pressure via email: A confirmation email arrives with the correct logo and formatting.
  • Phase 3 – AI chatbot on Facebook: A bot automatically answers questions about "how to protect your account."
  • Phase 4 – Fake video call: A "bank employee" calls via video to confirm and requests the OTP.
  • Phase 5 – Telegram/Zalo for "emergency support": Guides the victim through the final steps.

Each channel reinforces the credibility of the others, creating a closed "scam ecosystem."

AI optimizes the attack lifecycle.

Currently, cybercriminals are using AI to scale up scams from "manual" to "industrial." AI chatbots have been exploited to become "sleepless scam consultants" capable of:

  • Responding 24/7 with highly natural language.
  • Learning from each conversation to adjust tactics.
  • Handling thousands of victims simultaneously.
  • Feigning sympathy to build an emotional connection with the victim.

Beyond simply reaching victims, AI is also used to:

  • Scan and analyze millions of social media profiles to find potential victims.
  • Generate code that spins up fake websites in minutes.
  • Optimize attack timing based on the victim's online behavior.
  • Personalize content: write "bait" that matches each target's psychology and circumstances, based on the collected data.

The shift in online fraud

If 2025 was the period of accumulating "fuel" from data, then 2026 is the time when cybercriminals will activate their automated attack systems on a large scale. Below is a detailed analysis of this shift:

Each aspect below is broken out by its 2025 baseline, the 2026 shift or escalation, the related technologies, and the inferred risk level.

Scams targeting individual users
  • 2025: Fewer victims, but significant economic damage. Scenarios impersonating banks, government agencies, delivery drivers, and investment firms are common.
  • 2026: Scams are being "AI-ized": deepfakes, voice clones, and virtual characters make impersonations of officials and relatives in fake videos far more convincing.
  • Technologies: deepfake, voice clone, virtual characters.
  • Risk: very high, because sophisticated psychological manipulation makes real and fake hard for the average user to tell apart.

Attack surface
  • 2025: Mobile and web are the primary platforms for individual and organizational users.
  • 2026: Mobile becomes the biggest risk surface through its ties to banking, digital identity, customer service, and e-wallets.
  • Technologies: mobile banking, digital identity verification (e-KYC), e-wallets.
  • Risk: very high, because a phone holds nearly all of an individual's most sensitive information.

Malware / ransomware
  • 2025: Data theft for blackmail is a major trend amid rapid digitalization.
  • 2026: AI optimizes the attack lifecycle, from reconnaissance and malware writing to automated exploitation, making detection harder.
  • Technologies: ransomware, AI-assisted malware.
  • Risk: extremely serious for businesses and critical infrastructure, given the speed of attacks.

Luring techniques
  • 2025: Mostly familiar scam scenarios, carried out over one or two channels.
  • 2026: Multi-channel attacks become the norm: victims are lured from SMS and social media to chat apps and fake payment sites, with AI-powered conversations running 24/7.
  • Technologies: multi-channel campaigns, AI chatbots.
  • Risk: high, because a closed scam ecosystem traps victims on multiple platforms at once.

The scam industry
  • 2025: Fraud centers operate heavily in Southeast Asia, linked to money laundering and transnational organized crime.
  • 2026: The criminal model keeps expanding into new areas; pressure from fraud is unlikely to ease in the short term.
  • Technologies: transnational criminal networks, digital money-laundering systems.
  • Risk: high; it destabilizes regional cybersecurity and poses major challenges to law enforcement.

Personal data and leaks
  • 2025: Leaked data keeps fueling scams; it is bought, sold, and exploited for more precisely targeted attacks.
  • 2026: As legal frameworks tighten, criminals exploit the "data/identity update" theme to commit impersonation fraud.
  • Technologies: data mining, social engineering.
  • Risk: moderate to high, a direct threat to individual privacy and financial security.

Protective Shield: 6 Golden Rules Against Scams

To combat the sophistication of AI-driven crime, follow these principles:

  • Be wary of familiar images and voices: when asked to transfer money, hang up and call back on the officially registered number to verify.
  • Block multi-channel lure paths: don't click unfamiliar links; type the website address yourself or use the official app.
  • Verify before transferring money: always cross-check through a second communication channel before executing a transaction.
  • Protect your mobile device: treat your phone like a wallet; update the operating system regularly and avoid installing apps from unknown sources.
  • Manage your personal data: limit sharing sensitive information (ID documents, airline tickets) on social media; use two-factor authentication (2FA).
  • Follow the Stop – Save – Report procedure: if in doubt, stop interacting, save the evidence, and report immediately to your bank or the authorities.
Infographic: 6 rules against online scams

In the age of AI, vigilance lies not only in verifying information but also in adhering to rigorous authentication procedures. Don't let emotions override your safety.

