Security researchers have discovered a highly effective new jailbreak that can dupe nearly every major large language model into producing harmful output, from explaining how to build nuclear weapons to encouraging self-harm. As detailed in a writeup by the team at AI security firm HiddenLayer, the exploit is a prompt injection technique.



