The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.

I'm spending hours per day on this now. It's intense.

@bagder
At least it's spending time on stuff that's somewhat worthwhile?
But still, it's a DDoS on maintainers
@dirkhh it's actually really hard to complain when the reports are good, but yeah, we're still humans who need to deal with all this
@bagder @dirkhh do those reports also come with a way to fix them? Are patches attached too? In what percentage of them?
@pemensik @dirkhh very few have any attempts at fixes
@bagder @dirkhh which is in my opinion the most important problem with AI reports. If they can use AI to find issues, it should come with fixes as well. Ideally with a test case attached too. Then the burden on human maintainers would be only the review part, not everything else.
@pemensik @dirkhh the AIs are still better at finding problems than fixing them, in my experience
@bagder @dirkhh sure, that is expected. But a flood of reports still needs to be processed by maintainers. If they came together with a fix, it should be less work for the maintainer, in theory, assuming the proposal is decent enough. Cloning good maintainers is much harder than cloning AI analyzers. Increasing the number of reports won't help without increasing the number of entities fixing bugs. If you increase the former, please also try to increase the latter.