It's 2026. I am reading an AI-generated article about a prompt injection attack on an AI tool that allowed unauthorized access to that tool's npm package, which was then hijacked to install a completely different AI: https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another
A GitHub Issue Title Compromised 4,000 Developer Machines

A prompt injection hidden in a GitHub issue triggered a chain reaction that ended with roughly 4,000 developers getting OpenClaw installed without their consent. The attack composes well-understood vulnerabilities into something new: one AI tool bootstrapping the installation of another.