The management at my org is thankfully very good and gets it, but if you're struggling to explain to your management why they should stop sucking the GenAI marketing juice and chasing the AI laser pointer like a cat, and instead do foundational security, explain it in a way they'll understand: AI.

Also, if your management has seen the widely reported "80% of Ransomware Attacks are AI-Driven" headline published by MIT: it was paid for by a vendor.

The paper is absolutely ridiculous. It describes almost every major ransomware group as using AI - without any evidence (it's also not true; I monitor many of them). It even describes Emotet (which hasn't existed for many years) as AI-driven.

It cites sources like CISA reports as evidence of GenAI usage... but CISA never mentions AI anywhere in them.

The PDF is here and is absolutely crackers, MIT should be ashamed of themselves for letting this out the door.

https://cams.mit.edu/wp-content/uploads/Safe-CAMS-MIT-Article-Final-4-7-2025-Working-Paper.pdf

No, REvil don't use AI to set ransom demands: CISA never said that, none of the cited sources said that, and REvil was operating before the GenAI craze even started. It's just absolute nonsense - every page of it.

If you want to know why MIT are working with Safe Security and what Safe Security are doing: they sell an AI product which they say was developed with MIT to solve the problem described in the report they made up, after receiving 8 figures in VC funding.
Update: MIT have removed the study after this thread.

I have asked MIT these questions:

1) Is this paper being retracted?

2) How much money was paid to MIT Sloan by Safe Security?

3) What part did Safe Security play in the paper creation and review?

It isn't a new paper, btw - senior MIT people were presenting it in public at a cybersecurity conference earlier this year and linking to the now-deleted PDF.

The Financial Times today links to the now-deleted MIT study: https://www.ft.com/content/56cb100e-7146-488f-aae5-55304ae0eff6

If anybody knows anybody at the FT, could we please tell them it's fake?

MIT have also silently, without noting it on the pages, started rewriting their website to remove references to their own work. They've also changed the URLs of the pages to scrub those references.

Left, before: https://archive.ph/SckSr

Right, after: https://mitsloan.mit.edu/ideas-made-to-matter/80-ransomware-attacks-now-use-artificial-intelligence

I'm coining another term - cyberslop.

Cyberslop is where trusted institutions use baseless claims about cyber threats from generative AI to profit, abusing their perceived expertise.

I'm also starting a series about it, called CyberSlop. Much more soon.

Several MIT people sit on the board of Safe Security, the company that paid for the paper, including the person cited as the paper's author.

New by me - CyberSlop, where I look at orgs misusing GenAI fears to profit from their own customers.

First threat actor - MIT and Safe Security go full cyberslop.

https://doublepulsar.com/cyberslop-meet-the-new-threat-actor-mit-and-safe-security-d250d19d02a4

CyberSlop — meet the new threat actor, MIT and Safe Security

Cybersecurity vendors peddling nonsense isn’t new, but lately we have a new dimension — Generative AI.

According to MIT, Shodan is AI. 🥴

The whole report is like that, btw. It even lists ransomware groups that disbanded before the GenAI era as using GenAI. It also cites no evidence for any of the groups using GenAI.

I suspect Safe Security authored the problematic bits, though that's to be confirmed. Safe Security's website is absolutely full of nonsense, reads like it's AI generated, and has AI artwork of Chad AI robots on it.

A vendor has made a paid Forbes magazine post trying to redefine cyberslop as "High-Volume AI Threats". Hilarious.