There's one very important thing I'd like everyone to try to remember this week, and it is that AI companies are full of shit.

Only rarely do their claims actually bear scrutiny, and even then it's only the mildest of the claims they make.

So, Anthropic is claiming that their new, secret, unreleased model is hyper-competent at finding computer security vulnerabilities, and that they're *too scared* to release it into the wild.

Except all the AI companies have been making the same hypercompetence claims about literally every avenue of knowledge work for 3+ years, and it has literally never been true. So please keep in mind the highly likely possibility that this is mostly or entirely bullshit marketing, meant to distract you from the absolute garbage fire that is the code base of the poster child application for "agentically" developed software.

You may now resume doom scrolling. Thank you

@jenniferplusplus I seriously doubt this is smoke and mirrors; recent models have improved significantly for cybersec, and the industry is noticing:

https://mastodon.social/@bagder/116336957584445742

https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/

The industry consensus seems to be that there's going to be a torrent of vulnerabilities found in all sorts of software, and they're not prepared to handle the blast radius. It's not surprising that Anthropic wants to give a select few a head start to tackle them. It would be nice if their token fund were open to applications from all OSS projects, though.

I'm also pressing "X doubt" on the idea that you'd spend months coordinating between AWS, Apple, Microsoft, Google, and the Linux Foundation to organise this just because your tool's code leaked online.

AI bug reports went from junk to legit overnight, says Linux kernel czar (The Register). Interview: Greg Kroah-Hartman can't explain the inflection point, but it's not slowing down or going away.
@budududuroiu @jenniferplusplus I wouldn't give Anthropic's motives a lot of credit here but LLMs do make bug hunting much easier.
@mirth @budududuroiu @jenniferplusplus Tell that to all the open source repo maintainers who get spammed with fake, nonsensical bug reports generated by AI?

@jedimb They can... close submissions? Many projects already have. It's like a 2 second change.

@mirth @jenniferplusplus

@budududuroiu @mirth @jenniferplusplus That makes bug fixing more difficult, because legitimate reports get blocked alongside the noise.

@jedimb and the alternative is?

@mirth @jenniferplusplus

@budududuroiu @mirth @jenniferplusplus What we had just a few years ago.

@jedimb yeah, well, that ship sailed long ago.

@mirth @jenniferplusplus

@budududuroiu @mirth @jenniferplusplus "The plague is here. Let's just live with it" does seem to be a recurring sentiment, but it doesn't change that it's a plague.

@jedimb norms are downstream from power. The current power balance is shifted towards frontier labs and hyperscalers, so norms around personal computing (RAM prices) and open source software (AI slop floods) are dictated by them.

Moralising about AI use with no power to back it up is useless; gatekeeping is power, because it says: "want to contribute to this project? Abide by our rules."

https://www.joanwestenberg.com/the-case-for-gatekeeping-or-why-medieval-guilds-had-it-figured-out/

@mirth @jenniferplusplus

The case for gatekeeping, or: why medieval guilds had it figured out (Westenberg)

Every open source maintainer I've talked to in the last six months has the same complaint: the absolute flood of mass-produced, AI-generated, mass-submitted slop requests has turned their repositories into a slush pile. The contributions look like contributions: they have commit messages, they reference issues, and they follow templates, etc.
@budududuroiu @mirth @jenniferplusplus Goal post moved into a different dimension, I see.