Warning: AI is becoming a "weapon" of cybercriminals


by Editor CLD

Artificial intelligence (AI) has brought about tremendous advances, but it has also been turned into a powerful attack tool by cybercriminals, creating a new wave of scams and intrusions with unprecedented speed and scale.

Current status and impact

Today, AI has gone beyond its role as a conventional support tool, becoming an automation platform that helps cybercriminals scale and increase the sophistication of their attack campaigns.

  • Rapid growth: The number of cyber attacks using AI is climbing sharply, with a growth rate of up to 62% in a single quarter, as reported by cybersecurity experts.
  • Major damage: AI-enhanced attacks have already caused billions of dollars in global financial losses, reaching $18 billion in 2024 alone.
  • Lowering the barrier: AI makes it easier for even less-skilled attackers to create polymorphic (mutating) malware and complex phishing scenarios.

New AI attack methods

AI is being “weaponized” by cybercriminals for attacks in both non-technical (social engineering) and technical forms:

Escalating Social Engineering Fraud

This is the most dangerous method, as it targets human psychology directly:

  • Deepfake Attacks (Audio & Video):
      • AI can clone the authentic voice of a relative or friend from just a few seconds of recording. This is used to request urgent money transfers or to bypass voice-authentication systems.
      • AI can create a fake video from a single photo (a new deepfake appears on average once every 5 minutes), used for romance scams and for impersonating superiors or celebrities.
  • Personalized Phishing (Spear Phishing 2.0):
      • Large language models (LLMs) automatically generate phishing content (emails, messages) with perfect grammar and convincing logic.
      • AI can also copy the writing style of a specific individual (a colleague, a partner) by analyzing their public data, making fake messages far harder to detect.
      • Fraud success rates increase 10-20 times because AI automatically analyzes sentiment, screens targets, and customizes scenarios in real time.

Automated technical attacks

  • Attack and Penetration Automation: AI is programmed to plan, find loopholes, and execute 80-90% of the attack process, helping cybercriminals expand their operations.
  • Polymorphic Malware: AI creates viruses and malware that continuously change their structure, allowing them to bypass traditional antivirus programs.
  • Attacks on Defensive AI (Adversarial AI): Criminals inject false data into AI security systems to degrade their classification ability or to conceal malicious activity, leaving organizations' defenses vulnerable.
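The data-poisoning idea behind adversarial AI can be shown with a toy sketch. The numbers are synthetic and the "classifier" below is a deliberately simplistic nearest-centroid model (not any real security product): mislabeling a few high-suspicion samples as benign in the training data shifts the model's notion of "normal" until a clearly malicious sample slips through.

```python
def centroid(xs):
    """Mean of the training scores for one class."""
    return sum(xs) / len(xs)

def classify(score, benign, malicious):
    """Assign the label whose training centroid is closer to the score."""
    if abs(score - centroid(benign)) < abs(score - centroid(malicious)):
        return "benign"
    return "malicious"

benign = [1, 2, 2, 3]        # low "suspicion scores" (synthetic)
malicious = [8, 9, 9, 10]    # high suspicion scores (synthetic)

print(classify(7, benign, malicious))            # correctly flagged as malicious

# Attacker poisons the benign training set with mislabeled high scores:
poisoned_benign = benign + [9, 9, 9, 9]

print(classify(7, poisoned_benign, malicious))   # now misclassified as benign
```

Real detection models are far more complex, but the failure mode is the same: if an attacker can influence the training data, the decision boundary moves in their favor.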

The biggest flaw: the human factor

According to analysis from Anti-Fraud experts, while technology keeps advancing, public awareness of information security has not kept pace with the speed of digital transformation.

  • AI Dependence: Many people mistakenly treat AI as a “know-it-all expert,” using it to diagnose diseases and make financial decisions without consulting actual experts.
  • Lack of a Cyber Security Culture: Without knowledge of psychological-manipulation techniques in cyberspace, users easily become the “gaps” in the defense system.

Advice from Anti-Fraud

Experts agree that the human factor (awareness) is the biggest flaw. To combat criminal use of AI, a multi-layered strategy is needed:

For Individuals (Raising Awareness)

Enhanced personal security:

  • Double-check: If you receive an urgent money-transfer request via any channel (especially voice/video), hang up and call the person back on a previously saved phone number.
  • Anti-Phishing: Be extremely cautious with emails/messages that ask for information or click on links, even if they appear professional and personalized.
  • Multi-Factor Authentication (MFA/2FA): Always enable MFA on all important accounts (email, banking, social networks) so that attackers are blocked even if they obtain your password.
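To illustrate the mechanics behind MFA, here is a minimal sketch of a time-based one-time password (TOTP), the algorithm used by most authenticator apps (RFC 6238). The secret and timestamp below come from the RFC's published test vectors, not from any real account:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step                        # 30-second time window
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 s
print(totp(b"12345678901234567890", timestamp=59))  # -> "287082"
```

Because the code changes every 30 seconds and is derived from a shared secret, a stolen password alone is not enough to log in.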

Protect yourself from AI tools

  • Voice/Face Protection: Minimize sharing of detailed personal voice samples, videos, or images on public platforms.
  • Responsible Use of AI: Don't use AI to replace doctors, teachers, or analysts. Always verify the information provided by AI.
  • Practice Psychological Coping Skills: Proactively learn about forms of psychological manipulation so you can respond quickly and appropriately when targeted.

For Agencies/Enterprises

  • Intensive Training: Train employees to recognize Deepfake attacks and Spear Phishing.
  • Using AI to Counter AI: Apply AI-powered security tools to detect automated attacks and anomalous behavior in real time.
  • Multi-Layer Defense: Don't rely on a single security solution; combine human oversight, clear processes, and advanced AI technology.
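As an illustrative sketch of the anomaly-detection idea behind "AI to counter AI" (real products use far richer models; the login-rate numbers below are invented), defensive tools flag activity that deviates sharply from a historical baseline:

```python
import statistics

def is_anomalous(history, current, threshold: float = 3.0) -> bool:
    """Flag a value more than `threshold` standard deviations from the
    historical mean -- a toy stand-in for AI-based anomaly detection."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > threshold

# Baseline: roughly 40 login attempts per minute; a burst of 400 stands out.
normal = [38, 41, 40, 39, 42, 40, 37, 43]
print(is_anomalous(normal, 400))  # -> True  (likely automated attack)
print(is_anomalous(normal, 41))   # -> False (within normal variation)
```

Production systems apply the same principle across many signals at once (login locations, traffic patterns, file access), which is where machine learning earns its keep.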
