The first one made public:
https://thehackernews.com/2025/08/researchers-uncover-gpt-5-jailbreak-and.html
Researchers bypassed GPT-5's guardrails using narrative-based jailbreaks, exposing AI agents to zero-click data-theft risks.