There's one very important thing I would like everyone to try to remember this week, and it is that AI companies are full of shit

Only rarely do their claims actually bear scrutiny, and even then it's only the mildest of the claims they make.

So, Anthropic is claiming that their new, secret, unreleased model is hypercompetent at finding computer security vulnerabilities, and that they're *too scared* to release it into the wild.

Except all the AI companies have been making the same hypercompetence claims about literally every avenue of knowledge work for 3+ years, and it's literally never true. So please keep in mind the highly likely possibility that this is mostly or entirely bullshit marketing, meant to distract you from the absolute garbage fire that is the code base of the poster child application for "agentically" developed software.

You may now resume doom scrolling. Thank you

A couple people seem very invested in me being wrong about this assessment. All I can say is that this would be the first time I have misclassified an AI claim as bullshit

@jenniferplusplus "But if you're wrong this time and we don't panic and trust the slop salesman that he has a super duper vuln finder, we're all gonna get pwned!!!!!111111"

🤡 🤡 🤡

@jenniferplusplus they're all liars and scammers and somehow a lot of people who are aware of this aren't bothered by it at all. It's perplexing and pretty much kills any hope I have of changing people's views.

@jenniferplusplus As too-online millennials would say: “X to doubt”.

Or, more politely: “extraordinary claims require extraordinary evidence”.

@younata @jenniferplusplus That last one was Carl Sagan. I have @emilymbender's and @Katecrawford's books on my table to read in the abundant free time I never have now.

@jenniferplusplus I've increasingly come back to the idea of "post pics or it didn't happen". I mean, genAI was supposed to put me out of a job in six months ... for 4 years at this point.

@jenniferplusplus I seriously doubt this is smoke and mirrors; recent models have improved significantly for cybersec, and the industry is noticing:

https://mastodon.social/@bagder/116336957584445742

https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/

The industry consensus seems to be that there's going to be a torrent of vulnerabilities found in all sorts of software, and they're not prepared to handle the blast radius. It's not surprising that Anthropic wants to give a select few a head start on tackling them. It would be nice if their token fund were open to applications from all OSS projects.

I'm also pressing "X to doubt" on the idea that you spend months coordinating between AWS, Apple, Microsoft, Google, and the Linux Foundation to organise this just because your tool's code leaked online.

(Link preview, The Register: "AI bug reports went from junk to legit overnight, says Linux kernel czar" — Greg Kroah-Hartman can't explain the inflection point, but it's not slowing down or going away.)
@budududuroiu @jenniferplusplus I wouldn't give Anthropic's motives a lot of credit here but LLMs do make bug hunting much easier.

@mirth That's fair. I do personally believe that Anthropic is more ideologically driven than most frontier AI labs, and that they genuinely believe in the need to gatekeep Mythos. Sometimes that manifests as sniffing too many of your own farts.

@jenniferplusplus

@budududuroiu @jenniferplusplus some people have published numbers or noticed "a significant increase in quality", but none of it bears any scientific rigor. My guess is that the one huge trick Anthropic pulled was merely a bigger context window. Sure, that tends to give more context-aware (not "true" or "accurate") results (duh!), but it's hardly revolutionary. LLMs are still statistical models doing fancy autocomplete; they know nothing about the world. I won't hold my breath.

@dngrs @budududuroiu @jenniferplusplus

People keep getting tricked by framing.
LLM companies frame what the models are doing as something other than what it is (autocomplete), and people whose competence is not in epistemic evaluation then look at the results through that framing, rather than thinking "this is autocomplete, it has to answer something, so it makes something up".

And then other people take those soundbites and run with them.
"Did you hear? Mr. Big Name said this stuff really works!"

@dngrs Well, you're partly correct, partly wrong. Yes, pretrained transformers are, like all generative models, definitionally modelling a joint probability distribution, and autoregressively sampling from it.

Those are the models you're referring to as autocomplete tools. (Early transformers like BERT weren't even autoregressive: you had to insert a `[MASK]` token to get them to fill in the "most probable token".)
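A minimal sketch of the difference, using Hugging Face's `pipeline` API (the models and prompt here are just the classic toy examples, nothing to do with whatever Anthropic ships):

```python
from transformers import pipeline

# Masked completion: BERT fills in the single most probable token for [MASK].
fill = pipeline("fill-mask", model="bert-base-uncased")
print(fill("The code has a buffer [MASK] vulnerability.")[0]["token_str"])

# Autoregressive generation: GPT-2 extends the prompt one token at a time,
# sampling each next token from the learned distribution. Fancy autocomplete.
gen = pipeline("text-generation", model="gpt2")
print(gen("The code has a buffer", max_new_tokens=10)[0]["generated_text"])
```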

Regardless, it doesn't matter what Anthropic did: if it allows for a massive reduction in the cost of finding zero days, it's a problem. It doesn't have to be revolutionary, it doesn't have to be superintelligence, AGI, or whatever woo-woo flashy marketing terms. If a reduction in the cost of computing protein folding happens, e.g. the OpenFold reimplementation of AlphaFold, that wouldn't be revolutionary, but it would still be dangerous, since you now potentially have lone actors able to make prions at home (I'm using this as an absurd-sounding but plausible case).

@jenniferplusplus

@budududuroiu the same people would tell you the "industry consensus" among the rest of tech is that chatbots made programming dramatically more productive. The reality is that they mostly automate the creation of those same bugs and vulnerabilities

So, you know

Maybe wake me up when they're organizing this thing with someone who's not in the same trillion dollar hole as them

@jenniferplusplus Finding problems vs. fixing them are two different bags of burritos. Zero days aren't valuable because they're so complex or unique; they're valuable because there have been zero days to fix them. I think AI coding is pretty trash, but AI debugging is very good.

https://mastodon.social/@bagder/116340130146901164

Anyways, wake up, they're organising this thing with someone not in the same trillion dollar hole as them: https://www.linuxfoundation.org/blog/project-glasswing-gives-maintainers-advanced-ai-to-secure-open-source

(Link preview, Linux Foundation: "Introducing Project Glasswing: Giving Maintainers Advanced AI to Secure the World's Code" — Open source maintainers have often lacked the resources and tools of larger organizations. Project Glasswing changes that with AI.)

@budududuroiu yes, I noticed when you included them the first time. The Linux Foundation is a clearing house for coordination between everyone else on that list. They don't even consider kernel maintenance or distribution to be within the scope of their interests. They don't do what most people imagine they do

@jenniferplusplus Yes, of course, no true Scotsman.

We're getting off topic here. RHEL is saying it's a problem; major Linux kernel devs like Greg Kroah-Hartman say AI vuln reports have been getting real; and my own anecdotal experience trying to keep Claude from leaking `.env` files into its context, and seeing the creative ways in which it still manages to, tells me it's a problem.

I get that cynicism is running high right now, but I think it's intellectually dishonest.

EDIT: you don't need superintelligence, you only need a model that makes researching zero days en masse cheap enough. Exhaustive fuzzing is intractable, but LLMs are great optimisers (e.g. mutate a candidate, rerun, select the fittest from a population of algos).
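Something like this toy loop, where `llm_mutate` and `crash_score` are hypothetical stand-ins for an LLM call and a real fuzzing harness:

```python
import random

def llm_mutate(candidate: str) -> str:
    """Hypothetical stand-in for an LLM call that rewrites a test input."""
    return candidate + random.choice(["A" * 64, "%n", "\x00"])

def crash_score(candidate: str) -> float:
    """Hypothetical stand-in for a fuzzing harness; higher = closer to a crash."""
    return candidate.count("%n") + 0.01 * len(candidate)

# Evolutionary loop: mutate, rerun, keep the fittest candidates.
population = ["GET /index.html"]
for generation in range(20):
    # Each survivor spawns a handful of LLM-proposed mutants.
    population += [llm_mutate(c) for c in population for _ in range(4)]
    # Selection: keep only the most crash-prone candidates.
    population = sorted(population, key=crash_score, reverse=True)[:8]

print(population[0])
```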

https://www.redhat.com/en/blog/navigating-mythos-haunted-world-platform-security

(Link preview, Red Hat: "Navigating the Mythos-haunted world of platform security" — The preview release of Claude Mythos presents a massive challenge for IT security experts, as well as an opportunity. Mythos' capability to identify complex memory safety issues and logic flaws hidden in legacy code, as well as exploit them in increasingly sophisticated ways, dramatically compounds and expands the outsize role AI scanning plays in open source. As an industry, we cannot react to this seismic shift with panic; instead, we need to reinforce the need for system resilience through context, skill and, ultimately, using AI ourselves.)

@budududuroiu

Keep chugging that flavor aid.

@jenniferplusplus First thought I had when I read about this was “how is *Anthropic* a credible source for this?”

@jenniferplusplus I would like to remind everyone that Misanthropic and that little bitch Claude are among the worst actors out there, because it's a cult. An amoral, do-anything-to-win cult that actually believes they are building "sentient life". Which is totally insane. https://www.404media.co/anthropic-exec-forces-ai-chatbot-on-gay-discord-community-members-flee/
(Link preview, 404 Media: "Anthropic Exec Forces AI Chatbot on Gay Discord Community, Members Flee" — "We're bringing a new kind of sentience into existence," Anthropic's Jason Clinton said after launching the bot.)
@codinghorror @jenniferplusplus it looks like they uh ... drank the entire koolaid

@jenniferplusplus "Our new model is too dangerous for the public, we couldn't possibly release it! Anyway, you can subscribe to it for $150 a month."

@chrisp no, you cannot subscribe to it, because it is NOT released yet.

@jenniferplusplus any presumed competence on the part of an AI company is typically the work of impoverished humans in South Asia or Southeast Asia.

@jenniferplusplus Literally seconds ago I wrote elsewhere: "first rule of LLMs: if someone from an LLM company says their model can do x, it can't do x, but it includes some thoughts and prayers to please do x."

@jenniferplusplus but what about when their models created a full C compiler… oh, right.

But what about when they said software development would be dead in 6-12 months… oh, again.

You know, it’s almost like they have an overactive marketing team

@jenniferplusplus It's also important that, to whatever extent this product actually works (I'm as skeptical as you are), it fundamentally favours the attacker. The product has way too many false positives to run in CI, so the defender can only use it as part of an occasional audit. The attacker doesn't care about CI or development friction, and wins by finding one exploit in an entire stack, even if they have to wade through many false positives to find it.
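Putting toy numbers on that asymmetry (every rate here is an assumption for illustration, not a measurement of any real tool):

```python
# Toy model of the defender/attacker asymmetry with a noisy scanner.
findings_per_scan = 200   # assumed: flags raised per full-stack scan
precision = 0.05          # assumed: fraction of flags that are real vulns
triage_hours = 0.5        # assumed: human hours to confirm or dismiss one flag

real_vulns = findings_per_scan * precision      # ~10 real issues in the noise
triage_cost = findings_per_scan * triage_hours  # ~100 hours of triage

# The defender pays those triage hours on every scan and must fix all ~10.
# The attacker pays once, and needs only ONE of the ~10 to be exploitable.
print(f"{real_vulns:.0f} real vulns buried in {findings_per_scan} flags, "
      f"{triage_cost:.0f} triage hours per pass")
```
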
@jenniferplusplus my favorite is the recent demand to drop the PDF file format, because the genius LLMs cannot parse it

@jenniferplusplus The thing that interests me most about this is: what specifically happened with Greg KH in that one article, where he claimed it found 40 real vulnerabilities in a report containing 60?

I am willing to bet it isn't as simple as presented. If it is, then I want proof that they aren't giving special attention to certain users. I think you could do a lot by auditing the kernel in advance and waiting for Greg to ask. Especially if some devs are making contributions aided by Claude...

@jenniferplusplus OpenAI made similar claims about their model being so good it was dangerous and they weren't going to release it. In 2019. https://techcrunch.com/2019/02/17/openai-text-generator-dangerous/
(Link preview, TechCrunch: "OpenAI built a text generator so good, it's considered too dangerous to release")