If your Open Source project is seeing a steep increase in the number of high-quality security reports (mostly AI-generated) right now (#curl, Linux kernel, glibc confirmed), please tell me the name of the project.

(I'd like to compile a little list for my upcoming talk on this.)

@bagder

Just so I understand this correctly...
We don't want machine-generated vulnerability reports...

...so we can leave our #foss projects vulnerable to hackers who are not constrained by ideology from using #AI in their exploits?

Yeah, that tracks with the current majority of #infosec "professionals" letting Rome burn while they roast marshmallows, feeling super pure and superior.

@n_dimension @bagder let the attackers spend their time sifting through AI slop trying to find legitimate vulnerabilities. The defenders have a difficult enough time dealing with real, validated vulnerabilities.

If you want to spend your time proving us wrong, feel free to run your favorite LLM against a FOSS tool, then manually validate that what it spits out is legitimate by writing a proof-of-concept that is exploitable in the real world. If you do that, I’m sure the FOSS project of your choosing will fix it.

The issue is that the flood of AI-generated false positives is more than the all-volunteer teams supporting FOSS projects can handle. AI is great at writing convincing slop. It has not demonstrated an ability to consistently find legitimate vulnerabilities.

If you disagree, prove us wrong… but you do the validation yourself. Don’t just spit out AI slop and make someone else do that work.

@mathaetaes @bagder

I'll just leave this here
https://news.ycombinator.com/item?id=47633855

...there is no "sifting" here

Claude Code Found a Linux Vulnerability Hidden for 23 Years | Hacker News

@n_dimension @bagder Tell that to the FOSS maintainers who receive hundreds of fully AI-generated "vulnerability" reports that all turn out to be false positives.

If you want to use AI to find a bug, go for it. Validate the bug. Write a proof-of-concept (or have AI do it if you're not capable) and test it yourself. If your proof-of-concept achieves the desired results, then submit the bug and the POC.
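
To make that concrete, here is a minimal sketch of that validation step in Python. Everything in it is hypothetical rather than from this thread: the binary path ./target, the crafted input, and the crash criterion (the process dying from a signal) all stand in for whatever the real finding claims.

#!/usr/bin/env python3
# Minimal PoC harness: confirm a suspected crash bug actually reproduces
# before filing a report. The target binary and trigger input below are
# placeholders -- substitute the real ones for your finding.

import subprocess
import sys

TARGET = "./target"                     # hypothetical binary under test
CRAFTED_INPUT = b"A" * 4096 + b"\x00"   # hypothetical trigger input

def reproduces_crash() -> bool:
    # Run the target on the crafted input and check whether it died from
    # a signal (on POSIX, returncode is negative, e.g. -11 for SIGSEGV).
    try:
        proc = subprocess.run(
            [TARGET],
            input=CRAFTED_INPUT,
            capture_output=True,
            timeout=10,
        )
    except subprocess.TimeoutExpired:
        return False  # a hang is not the crash we claimed; don't report it as one
    return proc.returncode < 0

if __name__ == "__main__":
    if reproduces_crash():
        print("Crash reproduced; attach this script as the PoC.")
        sys.exit(0)
    print("No crash: the finding did not validate, so don't file it.")
    sys.exit(1)

A crash reproducer like this is only the floor, of course; a report worth a maintainer's time would also explain why the crash is security-relevant, not just that the process dies.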

There are people just haphazardly feeding FOSS codebases into a local AI and asking for bugs, then submitting whatever their LLM tells them without validating that it's correct. This effectively floods the maintainers with false positives and makes it very difficult for legitimate bug reports to get through.

Also, just because Claude found a bug doesn't mean it didn't also report 100 false positives before it found a real one. Given the effort it takes to triage a bug report, allowing any random yahoo with a keyboard to blindly submit AI-generated slop equates to enabling a DDoS on your bug triage staff. It's not sustainable.