Significant rise of reports (on the Linux Kernel Mailing List) https://lwn.net/Articles/1065620/

Here's something I think we'll all have to contend with, whether you're an AIgen enthusiast or not: attacking is easier than defending, these things don't get tired, and they *are* very good at finding exploits. None of us will be able to ignore that, and we will probably have to listen to the genuine reports from them, even if we reject AIgen input.

However, I don't think that's actually the right solution, and I don't think it's sustainable. 🧵


The fact of the matter is, most vulnerabilities fall under extremely common patterns, with known solutions:

- Confused deputies: capability security can fix or contain this in many cases; more on that later
- Injection attacks: primarily caused by string templating; structured templating (quasiquotation, functional combinators, etc.) fixes this
- Memory vulnerabilities: solved by memory-safe languages, and yes that includes Rust, but it also includes Python, Scheme/Lisp, etc etc etc
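To make the injection point concrete, here's a minimal sketch of string templating vs. structured templating (my own illustration using Python's sqlite3, not something from the thread):

```python
import sqlite3

# Toy in-memory database, purely for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

attacker_input = "' OR '1'='1"

# String templating: untrusted input is spliced into the query text,
# so it can change the *code* of the query, not just the data.
unsafe_query = f"SELECT secret FROM users WHERE name = '{attacker_input}'"
print(db.execute(unsafe_query).fetchall())  # → [('hunter2',)] — leaked!

# Structured templating: the query's shape is fixed up front; the
# input can only ever be treated as data, never as SQL.
print(db.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall())  # → [] — nothing leaks
```

The same shape-vs-data distinction is what quasiquotation and functional query combinators enforce in other languages.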

There are other serious vulnerability classes, such as incorrectly written or incorrectly used cryptography, but my primary point is this: most damage can either be avoided in the first place or contained (capability security being especially valuable for containment)
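On the capability side, the confused-deputy fix can be sketched in a few lines (again my own illustration, not from the thread): instead of handing a deputy a *name* that it resolves with its own authority, hand it the already-opened object.

```python
import io

# Ambient-authority style (vulnerable): the deputy resolves a
# caller-supplied name using *its own* privileges, so a hostile name
# like "../../etc/shadow" gets opened with the deputy's authority.
def read_log_by_name(filename):
    with open("/var/log/app/" + filename) as f:  # hypothetical path
        return f.read()

# Capability style: the caller passes an already-opened file object.
# The deputy holds only what it was explicitly given; there is no
# name lookup left to confuse.
def read_log(log_file):
    return log_file.read()

# A StringIO stands in for the one log the caller chose to share.
print(read_log(io.StringIO("boot ok\n")))
```

The deputy in the second version literally cannot be tricked into reaching other files, because it never performs a lookup at all.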

And... patching AIgen patches is going to get tough and tiring... (cotd...)

I don't think human reviewers are going to be able to keep up with the number of vulnerability reports we're seeing appear. I really don't. Humans won't be able to review at that scale, and there are also serious risks in blindly accepting AIgen patches, which for critical infrastructure could be a path to *inserting new* vulnerabilities.

We need to attack this systemically.

I have more to say. More later. But that's the gist for now.

@cwebber is that the AI that is trained on millions of lines of vulnerabilities

@thomasfuchs Yep!

As said, attacking is easier than defending :)