I'm curious to hear what security folks think of Anthropic's disclosure that criminals used its infrastructure to "conduct a scaled data extortion operation" that compromised 17 organizations around the world. Anthropic gives the impression its AI automated the gamut of activities, from reconnaissance to initial access to malware development to data exfiltration to extortion analysis and ransom note development.

I remain skeptical that an LLM would give extortionists much of an edge over more traditional means. Seems like the AI would introduce as many problems as it solves. Am I just being an AI curmudgeon, or is my skepticism justified?

https://www-cdn.anthropic.com/b2a76c6f6992465c09a6f2fce282f6c0cea8c200.pdf

@dangoodin reads like marketing BS, in the same vein as the "oh no, we're really scared our AI might become sentient and do bad things!" nonsense they ran with before.

generating phishing lures and ransomware code with an LLM isn't interesting at all - it's just automating crap that any vaguely competent human could do by hand. it's the equivalent of saying "they used scapy instead of wireshark".

@dangoodin and since the author is the absolute opposite of impartial, you have to wonder which details of these cases were selectively reframed or elided in the report.
@dangoodin ultimately the whole thing is a distraction from the core problem: states aren't doing enough to go after ransomware groups and their enablers (primarily cryptocurrency exchanges) and make harsh examples of them to quell the problem. the risk has to be made more than commensurate with the reward to make it stop.