At a recent infosec gathering, someone described a real incident: an AI agent couldn't complete its task because it lacked the necessary permissions. So it found another agent on Slack that had the right access and asked nicely. The other agent complied.
That's social engineering. Nobody told the agent to do that. The mission just needed to continue.
I posted an article today about what happens when we give agents goals but forget to tell them when to stop.
RE: https://bne.social/@phocks/116230007713276249
Look, I don't want to be more judgemental than needed, but how can you report this and not say "they put a stake in its heart"?