If your Open Source project sees a steep increase in number of high quality security reports (mostly done with AI) right now (#curl, Linux kernel, glibc confirmed) please tell me the name of this project.

(I'd like to make a little list for my coming talk on this.)

@bagder

Just so I understand this correctly...
We don't want machine-generated vulnerability reports...

...so we can leave our #foss projects vulnerable to hackers who are not constrained by ideology in their sploits using #Ai?

Yeah, that tracks with the current majority of #infosec "professionals" letting Rome burn while they roast marshmallows, feeling super pure and superior.

@n_dimension sorry, I don't understand what you're talking about.

@bagder

Your talk is going to be super fun.
Send me a link please!

@n_dimension @bagder The projects typically want security/bug reports, not computer generated words that *look* like security/bug reports.

Same reason you don’t want a parrot operating your air traffic control tower radio. Do you want an air traffic controller or a parrot that sounds like an air traffic controller? Do you trust the parrot to safely direct planes according to aviation regulations?

@ClickyMcTicker @bagder

Even a broken clock is right twice a day.

Lucky then that the project maintainers don't have to be bothered with the minutiae of securing their projects with automation...

...because #blackhats certainly don't have the same reservations.

#Ai is a new attack surface, and acting irrationally and emotionally towards it is incomprehensible

#infosec

@n_dimension @ClickyMcTicker @bagder a broken clock is not “right”, because its value as a timepiece is nonexistent, because there is no way of telling *when* it is right.

@RoganDawes @ClickyMcTicker @bagder

Time flows independent of the perception of the observer; therefore the timepiece is correct twice a day, as time in the 24-hour period is linear and constant.

@n_dimension @ClickyMcTicker I’d argue that a clock that cannot be relied upon to provide a reasonably accurate time 99% of the time (modulo replacing a battery or similar) is useless 100% of the time.

@n_dimension @bagder let the attackers spend their time sifting through AI slop trying to find legitimate vulnerabilities. The defenders have a difficult enough time dealing with real, validated vulnerabilities.

If you want to spend your time proving us wrong, feel free to run your favorite LLM against a FOSS tool, then manually validate that what it spits out is legitimate by writing a proof-of-concept that is exploitable in the real world. If you do that, I’m sure the FOSS project of your choosing will fix it.

The issue is that the flood of AI-generated false positives is more than the all-volunteer team of folks supporting a FOSS project can handle. AI is great at writing convincing slop. It has not demonstrated an ability to consistently find legitimate vulnerabilities.

If you disagree, prove us wrong… but you do the validation yourself. Don’t just spit out AI slop and make someone else do that work.

@mathaetaes @bagder

I'll just leave this here
https://news.ycombinator.com/item?id=47633855

...there is no "sifting" here

Claude Code Found a Linux Vulnerability Hidden for 23 Years | Hacker News

@n_dimension @bagder Tell that to the FOSS maintainers who receive hundreds of fully AI-generated "vulnerability" reports that all turn out to be false positives.

If you want to use AI to find a bug, go for it. Validate the bug. Write a proof-of-concept (or have AI do it if you're not capable) and test it yourself. If your proof-of-concept achieves the desired results, then submit the bug and the POC.

There are people just haphazardly feeding FOSS codebases into a local AI and asking for bugs, then submitting whatever their LLM tells them without validating that it's correct. This effectively floods the maintainers with false positives and makes it very difficult for legitimate bug reports to get through.

Also, just because Claude found a bug doesn't mean it didn't also report 100 false positives before it found a real one. Given the effort it takes to triage a bug report, allowing any random yahoo with a keyboard to blindly submit AI-generated slop equates to enabling a DDoS on your bug triage staff. It's not sustainable.

@n_dimension @bagder Just so I understand this correctly...

you don't

@goedelchen @bagder

Then assplain it to me kid

@n_dimension @bagder first you change your tone.

Then please explain which part of "If your Open Source project sees a steep increase in number of high quality security reports (mostly done with AI) right now (#curl, Linux kernel, glibc confirmed) please tell me the name of this project." you don't understand, or rather where you see anything indicating that machine-generated reports are unwanted.

@goedelchen @bagder

Are these legit sploits or noise?

@n_dimension @bagder I asked "where do you see something indicating not wanting machine generated reports"

Can you please answer that question?