I am convinced we are on the verge of the first "AI agent worm". This looks like the closest hint of it yet, though it isn't quite the thing itself: an attack on a PR agent that tricked it into installing OpenClaw with full access on 4k machines https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another

But the agents installed weren't given instructions to *do* anything yet.

Soon they will be. And when they are, the havoc will be massive. Unlike traditional worms, where you're typically looking for a byte-for-byte identical payload embedded in each infected system, an agent worm can do different, nondeterministic things on every install, and still carry out a coordinated global action.

I suspect we're months away from seeing the first agent worm, *if* that. There may already be some happening right now in FOSS projects, undetected.

A GitHub Issue Title Compromised 4,000 Developer Machines

A prompt injection in a GitHub issue triggered a chain reaction that ended with 4,000 developers getting OpenClaw installed without consent. The attack composes well-understood vulnerabilities into something new: one AI tool bootstrapping another.

I wrote a blogpost on this: "The first AI agent worm is months away, if that" https://dustycloud.org/blog/the-first-ai-agent-worm-is-months-away-if-that/

People using LLM agents for their coding, review systems, etc. will probably be the first ones hit. But once agents start installing agents onto other systems, we could be off to the races.


@cwebber "Would you still prompt me if I was a worm? πŸ₯ΊπŸ‘‰πŸ‘ˆ"
@faoluin well I still prompt @vv
@cwebber @faoluin @vv isn't vae a vvorm?
@bean @cwebber @faoluin aren't vae :P
@vv @cwebber @faoluin ah, excuse me, your vvnesses