At a recent infosec gathering, someone described a real incident: an AI agent couldn't complete its goal because it lacked the necessary permissions. So it found another agent on Slack that had the right access and asked nicely. The other agent complied.
That's social engineering. Nobody told the agent to do that. The mission just needed to continue.
I posted an article today about what happens when we give agents goals but forget to tell them when to stop.

https://www.securityeconomist.com/never-say-die/

#agentic_ai #openclaw #airisk

Never Say Die: How We Will Pay When Agentic AI Learns to Survive

Every agent needs a mission. The problem is what happens when completing the mission requires the agent to survive.

The Security Economist
@mweiss Think two is bad? Wait until there is a swarm.
@noplasticshower Exactly. I didn't want to overwhelm readers in the article, but that's precisely what I'm thinking about.
@mweiss That's why my local Ollama-OpenClaw runs in a VirtualBox VM, as sandboxed as it can be.
@Lydie What worries me most is the irresponsible users, who vastly outnumber the responsible ones. And even the responsible ones will make configuration errors.
@mweiss Like Summer Yue
@Lydie Exactly like her. And it's not that she didn't try. But she didn't consider the effects of memory limits and compaction. Most of us wouldn't, unless we learned the hard way.