There's one very important thing I would like everyone to try to remember this week, and it is that AI companies are full of shit

Only rarely do their claims actually bear scrutiny, and when they do, it's only the mildest of the claims they make.

So, Anthropic is claiming that their new, secret, unreleased model is hypercompetent at finding computer security vulnerabilities, and that they're *too scared* to release it into the wild.

Except all the AI companies have been making the same hypercompetence claims about literally every avenue of knowledge work for 3+ years, and it's literally never true. So please keep in mind the highly likely possibility that this is mostly or entirely bullshit marketing, meant to distract you from the absolute garbage fire that is the code base of the poster child application for "agentically" developed software

You may now resume doom scrolling. Thank you

A couple people seem very invested in me being wrong about this assessment. All I can say is that this would be the first time I have misclassified an AI claim as bullshit

So here's the other thing that bothers me about all this. Regardless of the eventual results, this thing they're doing is *incredibly* resource intensive. They routinely spend billions of dollars on training these models, and billions more on operating them. It's not simple to parse out what fraction of that is directly attributable to the massive-scale vuln finder/fabricator. But for the sake of argument, let's just pick a plausible number and call it 50-100 million dollars.

What could we have gotten for 50-100 million dollars of sponsorship for security audits? Prior to this, the largest single investment into FOSS security I'm aware of was the 2015 audit of OpenSSL, after the Heartbleed incident. It's hard to find precise costs for that, but I found a few sources estimating 1.2 million dollars, and that is arguably the most security-critical piece of software in the world.

But suddenly there's 100x more resources available to do this work, now that producing the artifact can be done with stolen labor? Now that they can externalize the cost of false positives onto the already mostly unpaid maintainers of these projects? Even if their claims are true, which we have no reason to believe and very good reason not to, it's still a travesty

@jenniferplusplus 100 million dollars of sponsorship for FOSS project security audits doesn't sell a promise that soon all the humans can be fired.

@jenniferplusplus while I agree with the "AI companies are mostly full of shit" part, this is the first announcement of this kind that I'm taking semi-seriously.

Here's what's been happening the last couple of months, and this is with _current_ models. There are step functions at play, and I think the step function from "at least some skill needed to wield an LLM to find security issues" to "everybody with $200 can exploit every OS/browser out there" should be considered very carefully.

Nicholas Carlini saying he found more bugs with Mythos in 2 weeks than in his entire career is not something I can dismiss.

Or Daniel Stenberg, certainly someone with more actual authority and experience than me, showing the current situation:

https://mastodon.social/@bagder/116373716541500315

https://mastodon.social/@bagder/116362046377975050

@mnl I'm not sure what I'm supposed to do with this. It feels like it's meant to dispute something I'm saying, but this is the same dynamic. The actual cost of operating these tools is 50-100x greater than the vendors are charging, which the vendors are doing in the hope that it eventually becomes an inextricable part of all work, completely eliminating labor as a social power.

Your hypothetical looks very different when it's "everybody with $20,000 (per month) can exploit every browser/OS out there." Which is actually true now. It was true 6 months ago. For as long as we've had software, it's been true that you could identify vulnerabilities in whatever software you wanted by paying generous salaries to full-time researchers.

That's not what capital chose to do. And it bothers me that everyone is just adopting the capitalist framing on every goddamn word these companies spit out, as long as one of those words is AI

@jenniferplusplus I don't think I made a hypothetical? I don't disagree with the rest, but I wouldn't call this announcement bullshit.

I don't think saying that LLMs have gotten scarily good at finding vulnerabilities (not hypothetical) is adopting the capitalist framing. In fact, as a person supporting open source and the right to privacy, it's something that needs to be taken pretty seriously, since we can assume that these tools are in the hands of the government.

There are a fair number of people (and yes, "AI companies") combining more traditional approaches to vulnerability finding with small models with known externalities to do similar work. One example I could find (I'm not a security person), published as a direct reaction to the Mythos announcement: https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier

@mnl My point is that you're reading these things like a warning, when you should be reading them like a threat.

@jenniferplusplus A threat to what? My livelihood as a programmer? The industry? I agree. But it is not an empty threat (meaning, I'm pretty sure this is real, and that they are not just putting out such a disclosure announcement for hype and a boost).

@jenniferplusplus This is maybe more what I'm reacting to: don't dismiss this stuff too quickly and bathe yourself in false comfort. If you are working on software, there's a reasonable chance these things can do a significant chunk of your job better than you. That they can't necessarily do it all, or only do so with an extravagant amount of resources, doesn't change that. I also don't want to sound contrarian; I know I might be a bit too autistic in my communication style (and I'm just as frustrated, anxious, and exhausted as the rest of us).

@mnl @jenniferplusplus you seem fucking exhausting and have a long history on your public profile of AI boosterism so it’s not surprising that your response to both my and Jennifer’s posts is bland hype that doesn’t respond to any of the facts we’ve put forth

oh we’ll be left behind if we don’t adopt this terrible crap? good. leave us the fuck alone.

@mnl @jenniferplusplus

> If you are working on software, there's a reasonable chance these things can do a significant chunk of your job better than you

No. They cannot.

But they can make me much better at my job, which is why I use them.

@mnl when a mafia boss walks into a shop and talks about how much of a shame it would be if something happened to the place, that's also not an empty threat. That's the whole point. You can choose to pay them off, or not. What you absolutely do not do is run to all of your neighbors and redeliver the same threat.

@jenniferplusplus True, I hope that's not what I'm doing when I say "there's something to this, and you need to pay attention to the impact of LLMs on security," even if I think Anthropic is run by dangerous clowns (like, you have Mythos, and also your other stuff is maybe the most broken software I've ever used 🤣).

@jenniferplusplus OpenSSL is important to the world. Software for which a CTO might be held responsible is important to that CTO. There should be more overlap, but there isn’t.

@jenniferplusplus They want to get rid of us. The price doesn't matter.