I wrote up some notes on two new papers on prompt injection: Agents Rule of Two (from Meta AI) and The Attacker Moves Second (from Anthropic + OpenAI + DeepMind + others) https://simonwillison.net/2025/Nov/2/new-prompt-injection-papers/
@simon With the rule of two, isn't the combination of untrusted inputs and changing state in an agent potentially quite dangerous already, even without access to private data? (Disclosure: didn't read the paper, just your post)
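The question above can be made concrete with a minimal sketch of the Rule of Two check: the paper's idea is that an agent session should combine at most two of three risky properties (processing untrusted input, accessing private data, changing state or communicating externally). The class and function names below are illustrative assumptions, not from the paper or the post:

```python
from dataclasses import dataclass


@dataclass
class AgentSession:
    """Capability flags for one agent session (names are illustrative)."""
    processes_untrusted_input: bool
    accesses_private_data: bool
    changes_state: bool  # includes communicating externally


def violates_rule_of_two(s: AgentSession) -> bool:
    """Return True if the session combines all three risky capabilities,
    which under the Rule of Two would call for extra controls such as
    human approval."""
    flags = [s.processes_untrusted_input,
             s.accesses_private_data,
             s.changes_state]
    return sum(flags) > 2


# The reply's scenario: untrusted input + state changes, no private data.
# The rule permits this pairing, even though it may still be risky.
session = AgentSession(processes_untrusted_input=True,
                       accesses_private_data=False,
                       changes_state=True)
print(violates_rule_of_two(session))  # False: only two capabilities combined
```

Note that the check returning False only means the session passes the heuristic; as the reply points out, two capabilities alone can still be dangerous.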