I'm curious to hear what security folks think of Anthropic's disclosure that criminals used its infrastructure to "conduct a scaled data extortion operation" that compromised 17 organizations around the world. Anthropic gives the impression its AI automated the gamut of activities, from reconnaissance to initial access to malware development to data exfiltration to extortion analysis and ransom note development.

I remain skeptical that an LLM would give extortionists much of an edge over more traditional means. It seems like the AI would introduce as many problems as it solves. Am I just being an AI curmudgeon, or is my skepticism justified?

https://www-cdn.anthropic.com/b2a76c6f6992465c09a6f2fce282f6c0cea8c200.pdf

@dangoodin reads like marketing BS, in the same vein as the "oh no, we're really scared our AI might become sentient and do bad things!" nonsense they ran with before.

generating phishing lures and ransomware code with an LLM isn't interesting at all - it's just automating crap that any vaguely competent human could do by hand. it's the equivalent of saying "they used scapy instead of wireshark".

@dangoodin and since the author is the absolute opposite of impartial you have to wonder which details of these cases were selectively reframed or elided in the report.
@dangoodin ultimately the whole thing is a distraction from the core problem: states aren't doing enough to go after ransomware groups and their enablers (primarily cryptocurrency exchanges) and make harsh examples of them to quell the problem. the risk has to be made more than commensurate to the reward to make it stop.
@dangoodin I don't know the full answer, but let's try a thought experiment. Assume that there's some level of competence necessary to pull off a crime like that, and that some group isn't up to that level. They try using AI to help.
Now, there have been plenty of reports and studies saying that code generated by AI systems is buggy, etc. But I've also heard people I consider to be seriously reputable say that it has helped them. Let's assume a normal distribution of attack code "quality" (whatever that is—maybe it's just suitability for this task). The mean is likely below the threshold needed for successful raids—but in that case, they're probably no worse off; they couldn't pull it off on their own anyway, and they're likely discounting the possibility of getting caught. If the competence tail is long enough, though, it will extend past the necessary competence level, in which case they win.
tl;dr: if the generated code is bad, they're no worse off, but if there's a reasonable chance it's good enough they win, which they wouldn't have done otherwise.
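The tail argument above can be sketched numerically. This is a hypothetical illustration, not a model of any real attack: assume attack-code "quality" is normally distributed with a mean below the competence threshold, and that the attacker can retry freely. All parameter values here (mean, spread, threshold, attempt count) are made up for the sake of the example.

```python
import random

random.seed(0)

MEAN, SD = 0.4, 0.15   # assumed quality distribution, mean below threshold
THRESHOLD = 0.7        # assumed competence needed to pull off the attack
ATTEMPTS = 20          # attacker retries freely and keeps the best result
TRIALS = 10_000        # Monte Carlo trials

def best_of(n):
    """Best quality the attacker sees across n independent attempts."""
    return max(random.gauss(MEAN, SD) for _ in range(n))

# Probability a single attempt clears the threshold vs. the best of many.
p_single = sum(random.gauss(MEAN, SD) > THRESHOLD for _ in range(TRIALS)) / TRIALS
p_best = sum(best_of(ATTEMPTS) > THRESHOLD for _ in range(TRIALS)) / TRIALS

print(f"P(one attempt clears threshold): {p_single:.3f}")
print(f"P(best of {ATTEMPTS} attempts clears it): {p_best:.3f}")
```

Even with a mean well below the threshold, the maximum over repeated attempts lands in the tail far more often than any single draw, which is the "they're no worse off, but sometimes they win" asymmetry in one picture.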
@SteveBellovin @dangoodin @rogeragrimes Yes, this is a form of the inherent Attacker's Advantage - they get to keep trying until an exploit works, no worries about breaking anything; they only have to find one way in, but the defense must close all the holes.

@dangoodin I imagine the long-term effects are more on volume than capability. You can get help with all this stuff from forums etc., but that involves effort and outreach. That delayed gratification can wear down someone acting more out of impulse than intention.

Having something that can often enough hand you plausible output skips that external feedback loop. A loner or small org spurred by a lack of impulse control can be emboldened by LLM assistance and move before rational thought kicks in.

@dangoodin from what I hear, @jerry and @lerg have been talking about this for weeks (months?)

At least from the editorial point of view

Edit: on their podcast

@dangoodin Mashable reports that cybersecurity firm ESET said it discovered the first-ever AI-powered ransomware, which it has dubbed PromptLock: AI-driven malware that creates and executes Lua scripts on the fly. Yet another AI malware tool being used…

@dangoodin They designed a tool whose most profitable use case is crime, and that tool is being used for crime.

In the real world, knowingly or recklessly facilitating a criminal conspiracy is illegal.

Section 230 has legalized that exact same behavior on the internet.

Anthropic is part of the supply chain for cyber-fraud, and legally it's allowed to be.

Section 230 took the internet away from users and handed it over to fraud cartels, money launderers and extortionists.