Also, if your management has seen the widely reported "80% of Ransomware Attacks are AI-Driven" headline published by MIT, it was paid for by a vendor.
The paper is absolutely ridiculous. It describes almost every major ransomware group as using AI, without any evidence (it's also not true; I monitor many of them). It even describes Emotet (which hasn't existed for many years) as AI-driven.
It cites things like CISA reports as evidence of GenAI usage, but CISA never mentioned AI anywhere in them.
The PDF is here, and it's absolutely crackers; MIT should be ashamed of themselves for letting this out the door.
https://cams.mit.edu/wp-content/uploads/Safe-CAMS-MIT-Article-Final-4-7-2025-Working-Paper.pdf
No, REvil don't use AI to set ransom demands, CISA never said that, none of the sources cited said that, and they were running before the GenAI craze. It's just absolute nonsense, every page is.
I have asked MIT these questions:
1) Is this paper being retracted?
2) How much money was paid to MIT Sloan by Safe Security?
3) What part did Safe Security play in the paper creation and review?
The Financial Times today links to the now deleted MIT study https://www.ft.com/content/56cb100e-7146-488f-aae5-55304ae0eff6
If anybody knows anybody at the FT, could we please tell them it's fake?
MIT have also silently, without any note on the pages, started rewriting their website to remove references to their own work. They've also changed the pages' URLs to remove the references there too.
Left, before: https://archive.ph/SckSr
Right, after: https://mitsloan.mit.edu/ideas-made-to-matter/80-ransomware-attacks-now-use-artificial-intelligence
I'm coining another term - cyberslop.
Cyberslop is where trusted institutions use baseless claims about cyber threats from generative AI to profit, abusing their perceived expertise.
I'm also starting a series about it, called CyberSlop. Much more soon.
New by me - CyberSlop, where I look at orgs misusing GenAI fears to profit from their own customers.
First threat actor - MIT and Safe Security go full cyberslop.
https://doublepulsar.com/cyberslop-meet-the-new-threat-actor-mit-and-safe-security-d250d19d02a4
The whole report is like that, btw. It even lists ransomware groups that disbanded before GenAI existed as GenAI users, and it cites no evidence for any of the groups using it.
I suspect Safe Security authored the problematic bits, but that's to be confirmed. Safe Security's website is full of absolute nonsense, reads like it's AI generated, and features AI artwork of Chad AI robots.
@GossiTheDog Nah, just looks like the usual clickbait article bullshit to me though.
Turn brain off, write some garbage, maybe have an LLM generate parts or all of it for you and post it without looking at it...
@GossiTheDog I mean, that is the new marketing trend, right? Oh, this application does OCR... the same OCR we have done for like 15 years... that's AI. You have an app with an ML model that recognizes hotdogs and not hotdogs... that's AI. The computer did spell check... you guessed it, AI!
Neural networks not needed

Fortunately, critical thinking is one of the first things that regular use of "AI" slop helps smooth away. Problem solved!
@GossiTheDog I wonder how Safe’s lawsuit with Security Scorecard is going.
https://www.bankinfosecurity.com/securityscorecard-accuses-safe-security-trade-secret-theft-a-25423
@GossiTheDog just dropping this here, and wondering if there are going to be any awkward moments there
https://safe.security/resources/events/safe-at-the-10th-annual-fair-institute-conference/