I've been struggling lately with how to properly secure AI tool usage within $DAYJOB.

I really like this article [1] "New prompt injection papers: Agents Rule of Two and The Attacker Moves Second" by Simon Willison. It references a post [2] from Meta "Agents Rule of Two: A Practical Approach to AI Agent Security".

This makes a lot of sense to me, and gives me hope that I can sell my coworkers on a process for how they approach these tools.

The rule: within a single session, an agent should satisfy no more than two of these three properties: [A] processing untrusted inputs, [B] having access to sensitive systems or private data, [C] being able to change state or communicate externally. Pick no more than two for any tool, and be careful when tools mix and match capabilities (e.g. MCP servers).
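As a thought experiment, the rule is simple enough to sketch as a checklist. This is my own toy illustration (the names and structure are mine, not from the Meta post), assuming you can honestly answer three yes/no questions about a tool:

```python
# Toy sketch of a "Rule of Two" check. The capability names are my own
# shorthand for the three properties described in the Meta post; this is
# an illustration, not an implementation of any real policy engine.
from dataclasses import dataclass

@dataclass
class AgentCapabilities:
    processes_untrusted_input: bool   # [A] e.g. reads web pages or inbound email
    accesses_sensitive_data: bool     # [B] e.g. private repos, credentials
    changes_state_externally: bool    # [C] e.g. sends requests, writes files

def violates_rule_of_two(caps: AgentCapabilities) -> bool:
    """True when one session combines all three risky capabilities."""
    enabled = [
        caps.processes_untrusted_input,
        caps.accesses_sensitive_data,
        caps.changes_state_externally,
    ]
    return sum(enabled) > 2

# A browsing agent with repo access that can also push changes: too much.
risky = AgentCapabilities(True, True, True)
# Same agent with external writes disabled for this session: within the rule.
safer = AgentCapabilities(True, True, False)

print(violates_rule_of_two(risky))   # True
print(violates_rule_of_two(safer))   # False
```

The tricky part in practice is the last clause above: an MCP server that bundles several tools can quietly flip one of these answers from "no" to "yes" mid-session.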

[1] https://simonwillison.net/2025/Nov/2/new-prompt-injection-papers/
[2] https://ai.meta.com/blog/practical-ai-agent-security/

#itsecurity #aitools #metaAI