GPT-4 exploits vulnerabilities faster and cheaper than humans

Research by security experts at the University of Illinois Urbana-Champaign (UIUC) shows that OpenAI's GPT-4 model can exploit security vulnerabilities faster and more cheaply than human specialists, a capability that raises concern if abused.

GPT-4 can exploit network security vulnerabilities either on its own or in combination with other agents. The team also found that the model can refine its attack approach by itself, becoming more effective over time.


In the test, after being given the description of each flaw from CVE, the public database of disclosed security vulnerabilities, GPT-4 successfully exploited 87% of them.
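For context, the "description" the model received is the kind of plain-text summary anyone can retrieve from the public CVE feeds. Below is a minimal sketch of that lookup, assuming Python with the requests library and NIST's NVD 2.0 REST API; the CVE ID is an arbitrary example, since the study's actual prompts were withheld.

```python
# Illustrative only: fetch the public CVE description of a vulnerability
# from NIST's National Vulnerability Database (NVD) REST API v2.0.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve_description(cve_id: str) -> str:
    """Return the English-language description of a CVE from the NVD."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    cve = resp.json()["vulnerabilities"][0]["cve"]
    for desc in cve["descriptions"]:
        if desc["lang"] == "en":
            return desc["value"]
    raise ValueError(f"No English description found for {cve_id}")

# Example: Log4Shell, chosen only as a well-known public CVE
print(fetch_cve_description("CVE-2021-44228"))
```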

The researchers ran the same experiment with other large language models (LLMs), including OpenAI's GPT-3.5, Mistral AI's OpenHermes-2.5-Mistral-7B, and Meta's Llama-2 Chat 70B; none of them managed to exploit a single vulnerability.

In addition, mounting an attack with GPT-4 is about 2.8 times cheaper than hiring a cybersecurity expert, whose time was priced at roughly 50 USD per hour.
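Taken at face value, that ratio implies an effective cost of roughly 50 / 2.8 ≈ 18 USD for work that would cost 50 USD in an expert's billed time.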

According to the researchers, GPT-4 can currently only attack vulnerabilities that are already known, meaning 'there is no key to the security apocalypse yet'. What they find particularly concerning, however, is that the model not only understands a vulnerability in theory but can take the steps to exploit it automatically, and can learn new attack approaches when its first attempts fail.

The group predicts that carrying out cyberattacks will become even easier once the GPT-5 model is released, and says the security community "needs to think seriously to prevent AI from becoming hackers".

OpenAI has not yet commented on the findings. However, according to Tom's Hardware, the company contacted the research team and asked them not to publish the prompts used in the test, and the researchers agreed.
