
While artificial intelligence (AI) brings a host of benefits to work, learning, and life, from chatbot support and productivity tools to personalized services, cybercriminals are exploiting the same technology to make fraud and personal data mining more sophisticated than ever.
More worryingly, once data is leaked, AI can analyze and combine fragments of information at lightning speed, turning seemingly harmless data into a weapon for impersonation, financial fraud, deepfakes, or targeted attacks. The explosion of AI not only increases the risk of data leaks but also makes the consequences of a leak far more serious and long-lasting.
Serious consequences when personal data is leaked
Cybercriminals personalize scams with high precision
When enough personal data is collected, AI can build a detailed profile of a victim and tailor messages to them. This helps criminals craft fraudulent scenarios with the right context, at the right time, touching the right concern, making them very difficult for the victim to recognize.
AI not only makes attacks more credible; it makes the tricks difficult to distinguish with the naked eye.
According to recent reports, this personalization is causing the number of Vietnamese users who fall for these traps to rise rapidly.
Deepfakes of faces and voices created by criminals from leaked data
A few portrait photos uploaded to AI trends, selfie videos on social networks, or voice recordings are enough for AI models to create realistic deepfakes of a person's face and voice.
When combined with leaked personal data (name, job, relationships, address, class schedule, etc.), deepfakes become an extremely effective weapon for intimidation, blackmail, and money-transfer fraud. This is a top threat to Vietnamese users, especially parents, the elderly, and people with social standing who are attractive targets for impersonation.
Losing control of digital identity
Biometric data (face, voice), financial information, addresses, activity history... once leaked or fed into AI models can never be retrieved by the user.
This leads to a hard truth: a password can be changed, but human data cannot be reset.
Data breaches in the AI era are not a "one-time" risk. Leaked data can be resold, recombined with other leaks, and reused repeatedly, creating a chain of risks that lasts for years and leaves users exposed to constant phishing attacks without knowing where it all started.
Protect yourself from the risks of personal data leaks
How can you protect personal data in the AI era? While AI is increasingly difficult to control, users can still reduce the risks with proactive measures.
Do not provide photos, videos, or voices to AI trends
If an app asks for camera and photo library access without a clear need → turn the permission off immediately.
Don't put sensitive data into chatbots and AI tools
Whether you use ChatGPT, Claude, or any other chatbot, never provide sensitive personal or organizational data such as:
- Identifying information (PII): CCCD (citizen ID), household registration, bank details, personal phone number, home address.
- Personal life information: spending habits, schedules, or personal financial data.
- Internal data and business secrets: contracts, customer information, financial statements, legal documents, photos of paperwork.
If you must use AI → always obscure, anonymize, or alter all sensitive data.
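As a minimal illustration of this advice, sensitive strings can be masked automatically before a prompt is sent to any chatbot. The patterns below are simplified assumptions for demonstration only; real redaction needs locale-specific rules (for example, the exact formats of Vietnamese CCCD and phone numbers).

```python
import re

# Hypothetical demo patterns -- real deployments need more
# careful, locale-specific rules than these.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?84|0)\d{9,10}\b"),
    "[ID]":    re.compile(r"\b\d{12}\b"),  # 12-digit citizen ID numbers
}

def redact(text: str) -> str:
    """Replace matched sensitive substrings with placeholder tags."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(tag, text)
    return text

prompt = "My number is 0912345678 and my email is lan@example.com."
print(redact(prompt))
```

The idea is to keep the question intact while the chatbot never sees the raw identifiers.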
Check and limit app access
Enable multi-factor authentication (2FA) for all important accounts
In addition, users should regularly follow the latest updates and warnings from Anti-Fraud as well as from reputable organizations and authorities. Continuously updating your knowledge is the best way to avoid becoming a victim.