I am convinced we are on the verge of the first "AI agent worm". This looks like the closest hint of it, though it isn't it quite itself: an attack on a PR agent that got it to set up to install openclaw with full access on 4k machines https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another

But, the agents installed weren't given instructions to *do* anything yet.

Soon they will be. And when they are, the havoc will be massive. Unlike traditional worms, where you're looking for the typically byte-for-byte identical worm embedded in the system, an agent worm can do different, nondeterministic things on every install, and carry out a global action.

I suspect we're months away from seeing the first agent worm, *if* that. There may already be some happening right now in FOSS projects, undetected.

A GitHub Issue Title Compromised 4,000 Developer Machines

A prompt injection in a GitHub issue triggered a chain reaction that ended with 4,000 developers getting OpenClaw installed without consent. The attack composes well-understood vulnerabilities into something new: one AI tool bootstrapping another.

I wrote a blogpost on this: "The first AI agent worm is months away, if that" https://dustycloud.org/blog/the-first-ai-agent-worm-is-months-away-if-that/

People who are using LLM agents for their coding, review systems, etc will probably be the first ones hit. But once agents start installing agents into other systems, we could be off to the races.

The first AI agent worm is months away, if that -- Dustycloud Brainstorms

Here's another way to put it: if those using AI agents to codegen / review are the *initialization vectors*, we now also have a significant computing public health reason to discourage the use of these tools.

Not that I think it will. But I'm convinced this is how patient zero will happen.

I know some people are thinking "well pulling off this kind of thing, it would have to be controlled with intent of a human actor"

It doesn't have to be.

1. A human could *kick off* such a process, and then it runs away from them.
2. It wouldn't even require a specific prompt to kick off a worm. There's enough scifi out there for this to be something any one of the barely-monitored openclaw agents could determine it should do.

Whether it's kicked off by a human explicitly or a stray agent, it doesn't require "intentionality". Biological viruses don't have interiority / intentionality, and yet are major threats that reproduce and adapt.

@cwebber so I'm following this right, it sounds like the project or its maintainers don't even necessarily need to even be using LLM tools, the attack pattern simply targets contributors who are using LLM development tools? and so all that is really needed is for the payload to be subtle and the maintainer to be sufficiently overwhelmed (say, by an endless fire hose of LLM-generated liquid shit slop pull requests)?
@aeva Yes and it's worse than that: the maintainer doesn't even need to be running these tools on their computer. The attack I linked had Claude's independently-running REVIEW BOT on GitHub commit it via injection attack

@aeva But once that was done, the agent was set up to install on users' devices

So the initial attack vector can literally be "Any AI agent in your stack whatsoever getting tricked" as a pathway for infecting computers everywhere

@cwebber apropos of nothing, is pottery still a big deal for humans? i was thinking this morning that pottery might be a nice career change for me.
@aeva @cwebber I'm a stokie so my default answer is yes. But the answer might be different for normal people
@KormaChameleon @cwebber stokie as in the demonym for someone from Stoke-on-Trent, which, as I just learned from Wikipedia, has had a totally baller pottery scene since the 17th century?
@aeva @cwebber I got pushback for buying Denby, that's less than 100km away but it isn't the homeland