LLM based agents cannot be secured if you 1) give them access to private data, 2) let them read untrusted content, and 3) allow them to communicate with the external world.

In his new blog post, Simon Willison writes a sharp & easy to understand summary about this "lethal trifecta for AI agents": https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/

M365 #Copilot hacks like #EchoLeak are nasty. But as Simon points out, once you combine different AI tools and #MCP to build your own agents, securing agents gets even harder.