RE: https://mastodon.social/@sbrunthaler/115928978574847952
The funny thing about this piece is that it firmly places exploits in the same category as spam messages: they don't have to be correct, robust, or legible, they just have to produce the desired result often enough of the time. Given that, I'll say that, for the first time, I do actually see a serious use for AI code generation in the development process: as a fuzzing tool to run against human-written code that does need to be correct, robust, and legible.
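To make that concrete, here's a minimal sketch of the shape I have in mind, not a real tool: `ask_model_for_inputs` is a hypothetical placeholder for whatever LLM call you'd actually make, and `parse_record` stands in for the human-written code under test.

```python
# Sketch only: treat a generative model as a noisy source of adversarial inputs
# and run them against human-written code that must never crash.
# ask_model_for_inputs() is a hypothetical stand-in for a real LLM API;
# parse_record() is a placeholder for the human-written target under test.

def parse_record(data: bytes) -> dict:
    """Human-written parser under test; must reject malformed input cleanly."""
    key, _, value = data.partition(b"=")
    if not key or not value:
        raise ValueError("malformed record")
    return {key.decode("ascii"): value.decode("ascii")}

def ask_model_for_inputs(prompt: str, n: int) -> list[bytes]:
    """Hypothetical LLM call; here it just returns a few hand-written probes."""
    return [b"", b"=", b"a=b", b"a=\xff\xff", b"a" * 10_000 + b"="][:n]

def fuzz_once(budget: int = 100) -> list[bytes]:
    """Run model-proposed inputs against the target and keep anything that blows up."""
    crashers = []
    for candidate in ask_model_for_inputs("propose malformed key=value records", budget):
        try:
            parse_record(candidate)
        except ValueError:
            continue  # a clean rejection is the behaviour we want
        except Exception:
            crashers.append(candidate)  # anything else is a finding worth triaging
    return crashers

if __name__ == "__main__":
    print(fuzz_once())
```

The point is that the generated side is allowed to be garbage; only the harness and the target need to be correct.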
Amusingly, I predict that this use case will, in a decade or two if these tools are still in use, significantly slow progress in the art of exploitation. You don't get brilliant exploit writers without a large pool of folks spending years honing their craft, starting not at the highest tiers but at the lowest. Sure, we have a bunch of folks already at the top of the field who will keep working for another twenty years, but no one will be coming up after them. Eventually we get stagnation, and likely with it a resurgence of poorly thought out mitigations that are just complex enough that the reasoning work required to create novel exploitation techniques stays forever out of reach of LLM-category models.
That said, there's a useful lesson and incentive-structure shift for development teams here too, stemming from the new assumption that every exploitable vulnerability will have a readily available exploit, regardless of whether writing one would otherwise be economically viable. That should, hopefully, push teams toward development styles that prioritize affordable guarantees of functional correctness: memory- and type-safety, parser and state machine generators, type-based functional correctness checks, and strong test suites.
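As a small illustration of the "strong test suites" end of that list, here's a round-trip property check using the Hypothesis library; `encode` and `decode` are hypothetical placeholders for whatever serializer a team actually ships.

```python
# Sketch of a cheap, broad correctness guarantee: for all records,
# decode(encode(record)) == record. encode()/decode() are placeholders.

from hypothesis import given, strategies as st

def encode(record: dict[str, int]) -> bytes:
    """Placeholder serializer: 'key=value' pairs joined by newlines."""
    return b"\n".join(f"{k}={v}".encode("ascii") for k, v in record.items())

def decode(data: bytes) -> dict[str, int]:
    """Placeholder parser for the format produced by encode()."""
    if not data:
        return {}
    out = {}
    for line in data.split(b"\n"):
        key, _, value = line.partition(b"=")
        out[key.decode("ascii")] = int(value)
    return out

@given(st.dictionaries(st.text(alphabet="abcdefgh", min_size=1), st.integers()))
def test_round_trip(record):
    # The property: decoding an encoded record reproduces it exactly.
    assert decode(encode(record)) == record
```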
Sadly, the appeal of much lower-effort generated code of unknowable correctness will likely steer most development teams in the opposite direction; hopefully, strict liability regulations for commercial software can counteract that pressure.