The rise of #Moltbook suggests viral #AIPrompts may be the next big #SecurityThreat

We don’t need self-replicating AI models to have problems, just self-replicating prompts.

Benj Edwards – Feb 3, 2026

Excerpt: "While 'prompt worm' might be a relatively new term we’re using related to this moment, the theoretical groundwork for AI worms was laid almost two years ago. In March 2024, security researchers Ben Nassi of Cornell Tech, Stav Cohen of the Israel Institute of Technology, and Ron Bitton of Intuit published a paper demonstrating what they called 'Morris-II,' an attack named after the original 1988 worm. In a demonstration shared with Wired, the team showed how self-replicating prompts could spread through AI-powered email assistants, stealing data and sending spam along the way."

Read more:
https://arstechnica.com/ai/2026/02/the-rise-of-moltbook-suggests-viral-ai-prompts-may-be-the-next-big-security-threat/

#AISucks #SkyNet #AIWorms #SelfReplicatingPrompts #MorrisII

The rise of Moltbook suggests viral AI prompts may be the next big security threat

We don't need self-replicating AI models to have problems, just self-replicating prompts.

Ars Technica

#Cybersecurity #AI #GenerativeAI #Malware #AIWorms: "As generative AI systems like OpenAI's ChatGPT and Google's Gemini become more advanced, they are increasingly being put to work. Startups and tech companies are building AI agents and ecosystems on top of the systems that can complete boring chores for you: think automatically making calendar bookings and potentially buying products. But as the tools are given more freedom, it also increases the potential ways they can be attacked.

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers has created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research."

https://arstechnica.com/ai/2024/03/researchers-create-ai-worms-that-can-spread-from-one-system-to-another/?utm_medium=social&utm_brand=ars&utm_social-type=owned&utm_source=twitter

Researchers create AI worms that can spread from one system to another

Worms could potentially steal data and deploy malware.

Ars Technica

Here Come the AI Worms

#Security researchers created an #AI worm in a test environment that can automatically spread between #generativeAI agents—potentially stealing data and sending #spam emails along the way.
#aiworms

https://www.wired.com/story/here-come-the-ai-worms/

Here Come the AI Worms

Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.

WIRED
"Feeling like your life's a bit too predictable? Remember, the universe threw scientists a curveball by creating AI worms that can hop from system to system. If AI can learn new tricks, so can you! Embrace the unexpected, and who knows, you might just hack your way to a new adventure. 🐛💻🌟 #AIWorms #LifeHacks #EmbraceTheUnexpected"