Hackers use AI to commit online fraud: even experts can fall into the trap
Even experienced cybersecurity professionals fall victim to sophisticated phishing attacks.
A recent Kaspersky study found that, according to 49% of respondents, the number of cyberattacks facing their organizations has risen by almost half over the past 12 months. Phishing is the most common threat, encountered by 49% of respondents, and half of those surveyed (50%) expect phishing attacks to increase significantly as cybercriminals increasingly adopt AI.
To shed light on the issue, Kaspersky analyzed how criminals use AI in phishing scams and why even experienced professionals can fall victim to AI-powered phishing attacks.
Cyber attacks using AI are becoming a trend. (Illustration photo)
When Phishing Attacks Get Personalized with AI
Traditionally, phishing attacks relied on blasting out the same generic messages to thousands of people, luring a select few into the trap. AI has changed that by generating sophisticated, personalized phishing emails at scale. AI-powered tools can mine publicly available personal information from social media, job boards, or company websites to craft emails tailored to each individual's role, interests, and communication style.
For example, a CFO might receive a phishing email that copies the tone and style of a message from the CEO, even referencing recent company events. This level of customization makes it difficult for employees to distinguish between a real message and a phishing message.
Deepfake Technology: A Powerful Weapon in Cyberattacks
Cybercriminals have also turned deepfake technology into a powerful weapon for their scams. Attackers use it to create fake audio and video clips that simulate the voices and appearances of executives and managers with astonishing accuracy.
For example, in one documented case, an attacker used deepfakes to impersonate multiple employees during an online meeting, convincing one employee to wire transfer approximately $25.6 million. As deepfake technology continues to advance, attacks of this type are expected to become more widespread and sophisticated.
How AI Helps Attackers Bypass Traditional Security Methods
Cybercriminals can use AI to trick traditional email filtering systems. By analyzing and mimicking legitimate email templates, AI-generated phishing emails can bypass security software checks. Machine learning algorithms can also test and refine scams in real time, increasing the success rate and making scams increasingly sophisticated.
Why Even Experienced People Can Fall for AI-Enhanced Phishing Attacks
Even experienced cybersecurity professionals have fallen victim to sophisticated phishing attacks. The authenticity and personalization of AI-generated content can override the skepticism that normally keeps seasoned professionals on their toes. Furthermore, these attacks often exploit human psychology, playing on urgency, fear, or deference to authority, which pressures employees to act without properly verifying the authenticity of a request.
How to Respond to AI-Enhanced Phishing Attacks
To combat AI-based phishing attacks, organizations need a proactive, multi-layered approach built on comprehensive cybersecurity systems. Regular updates and awareness training on AI-driven threats are critical to helping employees identify sophisticated phishing and other malicious attack tactics. In parallel, businesses should deploy robust security tools that can detect anomalies in emails, such as suspicious sentence patterns or metadata.
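The kind of metadata anomalies such tools look for can be illustrated with a minimal sketch. This is not Kaspersky's product or any specific vendor's logic; the keyword list and the checks below are illustrative assumptions, using only Python's standard email-parsing library:

```python
import re
from email import message_from_string
from email.utils import parseaddr

# Hypothetical list of pressure words typical of social-engineering lures.
URGENCY_WORDS = {"urgent", "immediately", "wire", "confidential", "asap"}

def email_anomalies(raw: str) -> list:
    """Return a list of simple red flags found in a raw email message."""
    msg = message_from_string(raw)
    flags = []

    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower() if reply_addr else from_domain

    # Flag when replies are silently redirected to a different domain.
    if reply_domain != from_domain:
        flags.append(f"Reply-To domain ({reply_domain}) differs from From ({from_domain})")

    # Flag pressure language in the subject line and body.
    body = msg.get_payload() if isinstance(msg.get_payload(), str) else ""
    words = set(re.findall(r"[a-z]+", (msg.get("Subject", "") + " " + body).lower()))
    hits = sorted(words & URGENCY_WORDS)
    if hits:
        flags.append(f"urgency keywords: {hits}")

    return flags
```

Real gateways combine many more signals (SPF/DKIM results, sending history, language models scoring the text), but the principle is the same: score observable anomalies rather than trust the message at face value.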
The 'zero-trust' security model also plays a key role in reducing the risk of damage from attacks. By limiting access to sensitive systems and data, it ensures that attackers cannot compromise the entire network even if they bypass one layer of security. Together, these measures form a comprehensive 'shield' of defense, combining advanced technology with close human oversight.