Deepfake Threat Warning: Bypassing Biometric Security


by Editor CLD

Deepfake technology (a blend of “deep learning” and “fake”), powered by Artificial Intelligence (AI), is taking counterfeiting to a new level, making it harder to detect than ever. While biometrics were once considered an absolute “shield”, this face and voice simulation tool has now become a serious threat to financial security globally and in Vietnam.

Let's join Anti-Phishing in learning about these risks and the security vulnerabilities criminals are exploiting.

The Dangers of Deepfake AI

Deepfake AI technology is now capable of bypassing biometrics when a security system is poorly deployed. Fraudsters use Deepfakes to fake faces and voices, then use stolen social media accounts to lure victims into transferring money.

  • Global financial damage: Deepfake incidents increased roughly tenfold (an increase of more than 900%) globally from 2023 to 2025, and AI-fueled fraud is projected to cost banks and their customers up to 40 billion USD by 2027.
  • Situation in Vietnam: The dangers of artificial intelligence and Deepfakes are becoming increasingly clear. In 2024, the Department of Information Security recorded more than 220,000 reports of online scams and fraud, mostly related to the financial and banking sectors.

Recent gang busts have also revealed sophisticated methods: criminals buy account information and phone SIMs, then ask account owners to record face videos that are replayed to defeat the biometric checks on bank accounts used for receiving fraudulent funds and laundering money.

Where does the security hole come from?

Many users worry that biometric technology has become “outdated” now that AI tools can create realistic faces capable of bypassing biometric checks.

According to the leader of the Department of Cyber Security and High-Tech Crime Prevention (Ministry of Public Security), the biometric systems of banks all provide a basic guarantee against security issues. However, Deepfake technology, aided by AI, has surpassed conventional technical measures.

Cybersecurity experts have clearly pointed out that the problem lies in the implementation stage:

  • Lack of an anti-spoofing check layer:
  • Experts say AI-based spoofing technology is sophisticated enough to bypass basic authentication layers like Face ID or video calls.
  • The problem, however, is not that biometric technology is “outdated,” but that many banks and platforms have not yet deployed enough layers of anti-spoofing checks (liveness detection to confirm a real person is present, plus deepfake detection).
  • Exploitation of application loopholes:
  • Some identity verification solutions today are easily fooled by still images, videos, or Deepfakes, partly because they do not meet quality standards.
  • Bad actors can exploit these application loopholes to commit biometric fraud without needing sophisticated AI technology.
  • Some banking apps have been cracked due to poor security, after which criminals can use even a portrait photo to bypass biometrics.
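To see why a still photo or replayed video fails an active liveness check, consider this minimal sketch. It is a hypothetical illustration (the challenge names and functions are invented for this example, not any bank's actual system): the server issues a random gesture challenge, so a pre-recorded response can only succeed by chance.

```python
import secrets

# Hypothetical set of gesture challenges an active liveness check might ask for.
CHALLENGES = ["blink", "turn_left", "turn_right", "smile"]

def issue_challenge() -> str:
    """Server picks an unpredictable challenge for this session."""
    return secrets.choice(CHALLENGES)

def live_user_response(challenge: str) -> str:
    """A real person can perform whatever gesture is asked."""
    return challenge

def replay_attacker_response(_challenge: str) -> str:
    """A replayed or static video can only show the gesture it was recorded with."""
    return "blink"

def verify(respond) -> bool:
    """Pass only if the response matches this session's random challenge."""
    challenge = issue_challenge()
    return respond(challenge) == challenge

# A live user always passes; the replay passes only when the random
# challenge happens to match the canned recording (about 1 in 4 here).
assert verify(live_user_response)
replay_passes = sum(verify(replay_attacker_response) for _ in range(1000))
```

Real liveness systems add passive signals (texture, depth, screen-moiré detection) on top of challenge–response, but the core idea is the same: unpredictability defeats replay.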

Recommendations from experts

Dealing with this criminal technology race requires coordinated action from both organizations and users:

On the part of Banks and Organizations

Banks and credit institutions need to regularly monitor new developments in order to detect vulnerabilities early. Most importantly:

  • Upgrade to multi-layer authentication systems that incorporate AI-based anti-Deepfake measures.
  • When implementing online customer identity verification (eKYC), applications must include anti-spoofing functions against Deepfakes and still images.
  • Increase risk warnings for users.
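The "multi-layer" idea above can be sketched as a decision rule: approve only when every independent layer passes, so a strong face match cannot compensate for a failed anti-spoofing check. The class, thresholds, and scores below are illustrative assumptions, not a real vendor API.

```python
from dataclasses import dataclass

@dataclass
class BiometricScores:
    face_match: float     # similarity to the enrolled face, 0..1
    liveness: float       # probability a live person is present, 0..1
    deepfake_risk: float  # probability the input is synthetic, 0..1

def ekyc_decision(s: BiometricScores,
                  match_thr: float = 0.90,
                  live_thr: float = 0.95,
                  fake_thr: float = 0.10) -> bool:
    """Approve only if ALL layers pass; no layer can offset another."""
    return (s.face_match >= match_thr
            and s.liveness >= live_thr
            and s.deepfake_risk <= fake_thr)

# A convincing Deepfake may score a near-perfect face match yet still be
# rejected by the liveness and deepfake-detection layers.
genuine  = BiometricScores(face_match=0.97, liveness=0.99, deepfake_risk=0.02)
deepfake = BiometricScores(face_match=0.99, liveness=0.40, deepfake_risk=0.85)
print(ekyc_decision(genuine))   # True
print(ekyc_decision(deepfake))  # False
```

The design choice worth noting is the AND across layers: a system that averages scores instead would let a flawless synthetic face mask a failed liveness check.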
On the User Side

Protecting your personal information is your first line of defense.

  • Multi-channel verification: When you receive a request for a money transfer or sensitive information via a call or video that could be a Deepfake, verify directly with that person through another communication channel (a known phone number, an SMS message).
  • DO NOT SHARE: Do not share your password, OTP code, or identification information with any unauthenticated individuals/organizations.
  • Beware of strange links: Avoid clicking links of unknown origin or links with unusual promotional or giveaway content.

Remember: technology is only truly effective when both organizations and users act cautiously and proactively.

