The Register: In a newly released paper, four university computer scientists report that OpenAI's GPT-4 large language model (LLM) can autonomously exploit vulnerabilities in real-world systems when given a CVE advisory describing the flaw. 🔗 https://www.theregister.com/2024/04/17/gpt4_can_exploit_real_vulnerabilities/

GPT-4, said Daniel Kang, assistant professor at the University of Illinois Urbana-Champaign (UIUC), in an email to The Register, "can actually autonomously carry out the steps to perform certain exploits that open-source vulnerability scanners cannot find (at the time of writing)."

#AI #LLM #GPT4 #OpenAI #vulnerability #CVE

Headline: "OpenAI's GPT-4 can exploit real vulnerabilities by reading security advisories" (The Register; subhead: "While some other LLMs appear to flat-out suck")