The rise of #Moltbook suggests viral #AIPrompts may be the next big #SecurityThreat

We don’t need self-replicating AI models to have problems, just self-replicating prompts.

Benj Edwards – Feb 3, 2026

Excerpt: "While 'prompt worm' might be a relatively new term we’re using related to this moment, the theoretical groundwork for AI worms was laid almost two years ago. In March 2024, security researchers Ben Nassi of Cornell Tech, Stav Cohen of the Israel Institute of Technology, and Ron Bitton of Intuit published a paper demonstrating what they called 'Morris-II,' an attack named after the original 1988 worm. In a demonstration shared with Wired, the team showed how self-replicating prompts could spread through AI-powered email assistants, stealing data and sending spam along the way."

Read more:
https://arstechnica.com/ai/2026/02/the-rise-of-moltbook-suggests-viral-ai-prompts-may-be-the-next-big-security-threat/

#AISucks #SkyNet #AIWorms #SelfReplicatingPrompts #MorrisII

Ars Technica

"On November 2, 1988, graduate student Robert Morris released a self-replicating program into the early Internet. Within 24 hours, the Morris worm had infected roughly 10 percent of all connected computers, crashing systems at Harvard, Stanford, NASA, and Lawrence Livermore National Laboratory. The worm exploited security flaws in Unix systems that administrators knew existed but had not bothered to patch.

"Morris did not intend to cause damage. He wanted to measure the size of the Internet. But a coding error caused the worm to replicate far faster than expected, and by the time he tried to send instructions for removing it, the network was too clogged to deliver the message.

"History may soon repeat itself with a novel new platform: networks of AI agents carrying out instructions from prompts and sharing them with other AI agents, which could spread the instructions further."


Fantasia - The Sorcerer's Apprentice (Part 3) (YouTube)