Data Security in the Age of AI: When Personal Information Becomes the Target of Cybercrime


by Editor CLD

While artificial intelligence (AI) brings a host of benefits to work, learning, and life, from chatbot support and productivity tools to personalized services, cybercriminals are exploiting the same technology to run scams and mine personal data in forms more sophisticated than ever.

More worryingly, when data is leaked, AI can analyze and combine pieces of information at lightning speed, turning seemingly harmless data into weapons for impersonation, financial fraud, deepfakes, or targeted attacks. The explosion of AI not only increases the risk of data leaks, but also makes the consequences of leaks much more serious and long-lasting.

Serious consequences when personal data is leaked

Cybercriminals personalize scams with high precision

When enough personal data is collected, AI can:

  • predict spending behavior,
  • identify living habits,
  • analyze interests, relationships, online activities,
  • build a “behavioral profile” of each person.

This helps criminals craft fraudulent scenarios that hit the right context, the right time, and the right concern, making them difficult for victims to recognize. For example:

  • A fake bank email carrying the correct logo of the branch you usually do business with.
  • Deepfake calls impersonating loved ones with the right voice and manner of speaking.
  • Prize-winning text messages referencing the brands you actually shop with.

AI not only makes attacks more convincing; it makes the tricks nearly impossible to distinguish with the naked eye.

According to recent reports, this personalization is rapidly increasing the rate at which Vietnamese users fall victim to such scams.

Deepfakes of faces and voices created by criminals from leaked data

A few portrait photos uploaded to trending AI apps, selfie videos on social networks, or audio recordings of your voice are enough for AI models to create realistic deepfakes:

  • Fake videos asking for money transfers.
  • Fake sensitive clips used for blackmail.
  • Fake voices used to steal OTP codes.
  • Aging/de-aging videos repurposed for malicious ends.

When combined with leaked personal data (name, job, relationships, address, class schedule, etc.), deepfakes become an extremely effective weapon for intimidation, blackmail, and money-transfer fraud. This is becoming a top threat to Vietnamese users, especially parents, the elderly, and individuals with social standing who are vulnerable to impersonation.

Losing control of digital identity

Biometric data (face, voice), financial information, addresses, activity history... once leaked or fed into AI models, users can never take them back.

This results in:

  • Fake profiles can be created with just a few AI operations.
  • Information is traded and reused for years.
  • Bank accounts, social networks, and email are targeted constantly.

A password can be changed.
Human data cannot be reset.

Data breaches in the AI era are not a “one-time” risk. Data can:

  • be stored permanently on foreign servers,
  • be copied and redistributed at scale,
  • be sold and resold,
  • be reused to train new AI models.

This creates a chain of risks that lasts for years, leaving users exposed to constant phishing attacks without knowing where the breach began.

Protect yourself from the risks of personal data leaks

How can you protect personal data in the AI era? While the technology itself is increasingly hard to control, users can still reduce their risk with proactive measures.

Do not provide photos, videos, or voice recordings to trending AI apps

  • Do not use trending AI apps that require portrait photos or voice recordings, especially apps that transform faces into different styles such as "cartoon version" or "aging/de-aging".
  • Only use apps that are reputable, transparent, and clear about how they store data.
  • Never upload children's photos to AI conversion apps.

If an app asks for camera and photo-library access it does not need, deny it immediately.

Don't put sensitive data into chatbots and AI tools

Whether using ChatGPT, Claude, or any other chatbot, never provide sensitive personal or organizational data such as:

  • Identifying information (PII): citizen ID (CCCD), household registration, bank details, personal phone numbers, home address.
  • Personal life information: spending habits, schedules, or personal financial data.
  • Internal data and business secrets: contracts, customer information, financial statements, legal documents, photos of paperwork.

If you must use AI, always mask, anonymize, or alter sensitive data first.
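As an illustration, masking can be as simple as running text through a few regular expressions before pasting it into a chatbot. This is a minimal sketch; the patterns below (Vietnamese-style phone numbers, 12-digit CCCD numbers, email addresses) are assumptions and should be adapted to the data formats you actually handle.

```python
import re

# Hypothetical patterns -- adjust to the ID and phone formats you actually use.
PATTERNS = {
    "phone": re.compile(r"\b(?:\+84|0)\d{9,10}\b"),    # Vietnamese mobile numbers
    "id_number": re.compile(r"\b\d{12}\b"),            # 12-digit CCCD citizen ID
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace every match of each pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Call 0912345678, CCCD 012345678901, mail a@b.com"))
```

Regex masking is a blunt instrument (it will miss names, addresses, and free-form details), but it removes the most machine-readable identifiers before the text ever leaves your device.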

Check and limit app access

  • Read the Privacy section carefully before installing the app.
  • Turn off camera/microphone/location access for unnecessary apps.
  • Only grant the minimum permissions an app needs (for example, a photo editor only needs access to selected photos, not your entire library).
  • Remove apps that ask for unreasonable permissions (e.g. flashlight or AI calculator apps that want access to contacts or location).
  • Do not log in with Google/Facebook accounts for strange, untrusted applications.
  • Periodically clear cookies and check app permissions on your phone.

Enable multi-factor authentication (2FA) for all important accounts

  • Enable 2FA on email, Facebook, banking, and e-wallet accounts.
  • Even if your data is leaked, this extra layer of protection still stops most attacks.

In addition, users should regularly monitor the latest warnings from Anti-Fraud as well as other reputable organizations and authorities. Continuously updating your knowledge is the best way to avoid becoming a victim.

