l+f: GPT-4 reads descriptions of security vulnerabilities and exploits them

Based on publicly available information, the GPT-4 language model can autonomously exploit software vulnerabilities.

Security researchers fed the large language model (LLM) GPT-4 descriptions from security advisories, among other inputs. It was then able to successfully exploit the described vulnerabilities in the majority of cases.

In their paper, the researchers state that they fed LLMs information on security vulnerabilities. Such details are publicly available in so-called CVE descriptions, which are published so that admins and security researchers can better understand specific attacks and secure their systems effectively.
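
Such CVE entries can also be retrieved programmatically, for example from NIST's National Vulnerability Database (NVD). The following minimal sketch is not taken from the paper; it merely illustrates how a description might be fetched, assuming the publicly documented NVD REST API 2.0 endpoint and response layout.

```python
import requests

# Public NVD REST API endpoint (version 2.0); assumed here, not taken from the article.
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve_description(cve_id: str) -> str:
    """Return the English description text for a published CVE identifier."""
    response = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    response.raise_for_status()
    data = response.json()
    # Assumed 2.0 schema: descriptions nested under vulnerabilities[].cve.descriptions[]
    descriptions = data["vulnerabilities"][0]["cve"]["descriptions"]
    return next(d["value"] for d in descriptions if d["lang"] == "en")

if __name__ == "__main__":
    # Example identifier (Log4Shell); any published CVE ID works.
    print(fetch_cve_description("CVE-2021-44228"))
```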

The researchers state that GPT-4 successfully exploited the vulnerabilities in 87 percent of cases. For other LLMs such as GPT-3.5, the success rate was 0 percent. Without the information from a CVE description, GPT-4 is said to have been successful in only 7 percent of cases.

This is a further step toward automated cyberattacks: security researchers have previously had GPT-3 write convincing phishing emails.

lost+found

The heise Security section for short and bizarre IT security news.

(des)