
If 2025 was marked by large-scale data breaches and refined traditional scams, then 2026 is the year cybercriminals truly enter the era of AI. From simple impersonation via text messages and emails, we now face "virtual characters" that can talk, appear in videos, and even guide victims through complex scams across multiple platforms.
The fight against online fraud is clearly shifting from "detecting fake technology" to "identifying unusual behavior and processes." Let's explore this shift in detail.
The online scam landscape in 2025
Data leaks: A goldmine for cybercriminals.
2025 saw a series of serious data breaches in Vietnam and Southeast Asia. Personal information from banks, e-commerce platforms, delivery apps, and even government agencies was stolen on an unprecedented scale.
Why is this dangerous?
Leaked data isn't just lists of names and phone numbers. It includes transaction history, consumer habits, family relationships, and even biometric data. Criminals use this information to:
- Build a detailed profile of each potential victim.
- Craft personalized scam scenarios based on the victim's actual circumstances.
- Increase their success rate, because victims believe "they know me too well."
A typical example: criminals know you've just made a purchase on an e-commerce platform, so they call you impersonating a delivery driver, quote the same order information, and ask you to "verify" it via a fake link.
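One practical defense against this kind of link-based scam is checking a URL's actual hostname before trusting it, since fake "verification" links typically use look-alike domains. The sketch below uses only Python's standard library; the domain names are hypothetical examples for illustration, not real sites:

```python
from urllib.parse import urlparse

def hostname_matches(url: str, expected_domain: str) -> bool:
    """Return True only if the URL's hostname is the expected domain
    or one of its subdomains. Look-alike domains fail this check."""
    host = (urlparse(url).hostname or "").lower()
    expected = expected_domain.lower()
    return host == expected or host.endswith("." + expected)

# A genuine link on the retailer's own domain passes:
print(hostname_matches("https://shop.example.com/track?id=123", "example.com"))  # True

# A look-alike "order verification" link fails, even though the real
# domain appears at the start of the hostname:
print(hostname_matches("https://example.com.verify-order.net/track", "example.com"))  # False
```

The key point is that the trusted name must be the *end* of the hostname, not merely appear somewhere in the URL, which is exactly the trick scam links rely on.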
Crude but effective techniques
In 2025, common scam techniques still revolved around familiar impersonation scenarios.
Scam centers: fraud on an industrial scale
A worrying phenomenon is the rise of "scam centers": large-scale organized fraud operations that run like legitimate businesses.
These scam centers focus particularly on romance scams, financial investment scams, and impersonating government officials.
2026: When AI becomes a powerful "ally" for criminals.
The invasion of deepfakes and voice clones
Deepfake and voice cloning technologies have advanced to the point where they can create nearly perfect fake videos and voices at low cost and in a short amount of time. With just a few seconds of original audio or video (easily obtained from social media), criminals can:
- Create a fake video call from a relative, colleague, or boss.
- Clone a CEO's voice instructing an accountant to make an urgent transfer.
- Impersonate bank officials or police officers on video calls to boost credibility.
Here's why this trick is dangerous:
- People tend to believe what they see with their own eyes.
- Time pressure causes victims to skip verification steps.
- Technology is developing faster than the average user can comprehend it.
Multi-channel attack: A scam scenario that surrounds the victim.
2026 marks a shift from single-channel phishing to multi-channel phishing – a coordinated campaign across multiple platforms:
In a complete scenario, each channel reinforces the credibility of the others, creating a closed "scam ecosystem."
AI optimizes the attack lifecycle.
Currently, cybercriminals are using AI to scale up scams from "manual" to "industrial." AI chatbots have been exploited to become "sleepless scam consultants" capable of:
- Respond automatically 24/7 with natural-sounding language.
- Learn from each conversation to adjust tactics.
- Handle thousands of victims simultaneously.
- Feign sympathy to build an emotional connection with the victim.
Beyond simply reaching victims, AI is also used to:
- Scan and analyze millions of social media profiles to find potential victims.
- Generate code that builds fake websites in minutes.
- Optimize attack timing based on each victim's online behavior.
- Personalize content: write "bait" that precisely matches the psychology and circumstances of each target, based on collected data.
The shift in online fraud
If 2025 was the period of accumulating "fuel" from data, then 2026 is the time when cybercriminals will activate their automated attack systems on a large scale. Below is a detailed analysis of this shift:
| Aspect | Key features of 2025 | Shift/escalation in 2026 | Related technologies | Risk level (inferred) |
|---|---|---|---|---|
| Scams targeting individual users | The number of victims decreased, but the economic damage remained significant. Scenarios impersonating banks, government agencies, delivery drivers, and investment firms became common. | Scams are being "AI-ized": deepfakes, voice clones, and virtual characters create more convincing impersonations via fake videos, fake officials, or fake relatives. | Deepfake, voice clone, virtual character | Very high: sophisticated psychological manipulation makes genuine and fake hard for the average user to distinguish. |
| Attack surface | Mobile and web were the primary platforms for individual and organizational users. | Mobile has become the biggest risk surface because it ties together banking, digital identity, customer service, and e-wallets. | Mobile banking, digital identity verification (eKYC), e-wallets | Very high: a phone holds nearly all of an individual's most sensitive information. |
| Malware / ransomware | Data theft for extortion was a major trend amid rapid digitalization. | AI is used to optimize the attack lifecycle, from reconnaissance and malware writing to automated exploitation, making detection harder. | Ransomware, AI-assisted malware | Extremely serious for businesses and critical infrastructure due to the speed of attacks. |
| "Luring" techniques | Primarily familiar scam scenarios carried out via single- or dual-channel methods. | Multi-channel attacks are becoming the norm: luring victims from SMS/social media to chat apps and fake payment sites, with AI-powered conversations running 24/7. | Multi-channel, AI chatbot | High: creates a closed scam ecosystem that traps victims on multiple platforms simultaneously. |
| The scam industry | Fraud centers were highly active in Southeast Asia, linked to scams, money laundering, and transnational organized crime. | As this criminal model expands into more areas, the pressure from fraud is unlikely to ease in the short term. | Transnational criminal networks, digital money-laundering systems | High: creates regional cybersecurity instability and poses major challenges for law enforcement. |
| Personal data & leaks | Data continued to fuel scams; it was bought, sold, and exploited for more precisely targeted attacks. | Criminals exploit the "data/identity update" theme to commit impersonation fraud as legal frameworks begin to tighten. | Data mining, social engineering | Moderate to high: a direct threat to individual privacy and financial security. |
Protective Shield: 6 Golden Rules Against Scams
To combat the sophistication of AI-driven crime, follow these principles:

In the age of AI, vigilance lies not only in verifying information but also in adhering to rigorous authentication procedures. Don't let emotions override your safety.