The rise of Moltbook suggests viral AI prompts may be the next big security threat
We don’t need self-replicating AI models to have problems, just self-replicating prompts.
Benj Edwards – Feb 3, 2026
Excerpt: "While 'prompt worm' might be a relatively new term we’re using related to this moment, the theoretical groundwork for AI worms was laid almost two years ago. In March 2024, security researchers Ben Nassi of Cornell Tech, Stav Cohen of the Israel Institute of Technology, and Ron Bitton of Intuit published a paper demonstrating what they called 'Morris-II,' an attack named after the original 1988 worm. In a demonstration shared with Wired, the team showed how self-replicating prompts could spread through AI-powered email assistants, stealing data and sending spam along the way."