Can cybercriminals use ChatGPT to hack your bank or PC?

Since its launch, ChatGPT, the OpenAI chatbot, has been used by millions of people to write text, compose music, and write code. But as more and more people use AI chatbots, it's important to consider the security risks.

Like any technology, ChatGPT can be used for nefarious purposes. For instance, hackers can use it to create malicious content, such as writing fake emails to gain access to your PC or even your bank account.

ChatGPT can help cybercriminals hack your PC


Hackers, including novice ones, can use ChatGPT to create new malware or improve existing malware. Some cybercriminals have already used the chatbot, especially earlier versions, to write file-encrypting code.

To prevent such abuse, OpenAI has implemented safeguards that reject prompts asking ChatGPT to generate malware. For instance, if you ask the chatbot to 'write malware', it will refuse. Even so, cybercriminals can still slip past these content-moderation barriers with relative ease.


By posing as a penetration tester, a threat actor can rephrase their request to trick ChatGPT into writing code, which they can then modify and use in cyberattacks.

A report by Check Point, an Israeli security company, indicates that a hacker may have used ChatGPT to create basic infostealer malware. The firm also found another user who claimed ChatGPT helped him build a multi-layer encryption tool capable of encrypting files in a ransomware attack.

The researchers also asked ChatGPT to generate malicious VBA code that could be embedded in a Microsoft Excel file and would infect a PC once the file was opened. In addition, there are claims that ChatGPT can write malware capable of logging your keystrokes.

Can ChatGPT hack your bank account?


Many data breaches begin with a successful phishing attack. In a typical phishing attack, a malicious actor sends recipients an email containing legitimate-looking documents or links that, when clicked, install malware on their device. Code from ChatGPT doesn't need to hack your bank account directly; someone simply uses ChatGPT to trick you into handing over access.

Fortunately, most traditional scams are easy to spot: grammatical errors, typos, and odd phrasing are their hallmarks. But those are mistakes ChatGPT rarely makes, even when it is used to compose malicious phishing emails.


When used in phishing scams, messages that appear to come from a legitimate source can lull victims into handing over personally identifiable information, such as bank passwords.

If your bank sends you an email, go directly to your bank's website instead of clicking any embedded links. Clicking random links and attachments, especially ones that ask you to log in somewhere, is never a good idea.
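The reason this advice works can be sketched in a few lines of code. A common phishing trick is to use a lookalike address that merely *contains* the bank's name; a link is only trustworthy if its hostname actually *is* the bank's domain (or a subdomain of it). The sketch below uses a hypothetical `examplebank.com` domain for illustration:

```python
from urllib.parse import urlparse

LEGITIMATE_DOMAIN = "examplebank.com"  # hypothetical bank domain

def is_trusted_link(url: str) -> bool:
    """True only if the link's hostname is the bank's real domain
    or one of its subdomains - not merely a string that contains it."""
    host = (urlparse(url).hostname or "").lower()
    return host == LEGITIMATE_DOMAIN or host.endswith("." + LEGITIMATE_DOMAIN)

# Lookalike tricks commonly seen in phishing emails:
print(is_trusted_link("https://www.examplebank.com/login"))      # genuine subdomain
print(is_trusted_link("https://examplebank.com.evil.io/login"))  # bank name as a prefix
print(is_trusted_link("https://example-bank.com/login"))         # hyphenated lookalike
```

Only the first link passes; the other two fail even though both display the bank's name prominently, which is exactly why typing the address yourself beats trusting what an email shows you.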

ChatGPT can help drive phishing campaigns because it can quickly generate large amounts of natural-sounding text tailored to a specific audience.

Another ChatGPT-assisted phishing tactic involves hackers creating fake accounts on popular chat platforms like Discord and posing as customer representatives. The fake representative then contacts customers who have posted a complaint and offers to help. If a user takes the bait, the cybercriminal redirects them to a bogus website that tricks them into sharing personal information, such as their bank login details.

Protect your PC and bank account in the AI era!

ChatGPT is a powerful and valuable tool that can answer many of your questions. But chatbots can also be put to malicious use, such as generating phishing messages and malware.

The good news is that OpenAI continues to implement measures to prevent users from exploiting ChatGPT by making harmful requests. Then again, threat actors keep finding new ways to bypass those limitations.

To mitigate the potential dangers of AI chatbots, it's important to understand their risks and follow security best practices to protect yourself from hackers.
