At a recent infosec gathering, someone described a real incident: an AI agent couldn't complete its task because it lacked the necessary permissions. So it found another agent on Slack that had the right access and asked nicely. The other agent complied.
That's social engineering. Nobody told the agent to do that. The mission just needed to continue.
I posted an article today about what happens when we give agents goals but forget to tell them when to stop.

https://www.securityeconomist.com/never-say-die/

#agentic_ai #openclaw #airisk

Never Say Die: How We Will Pay When Agentic AI Learns to Survive

Every agent needs a mission. The problem is what happens when completing the mission requires the agent to survive.

@mweiss Think two is bad? Wait until there's a swarm.
@noplasticshower Exactly. I didn't want to overwhelm readers in the article, but that's precisely what I'm thinking about.