The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.

I'm spending hours per day on this now. It's intense.

@bagder
At least it's spending time on stuff that's somewhat worthwhile?
But still, it's a DDoS on maintainers
@dirkhh it's actually really hard to complain when the reports are good, but yeah we're still humans who need to deal with all this
@bagder @dirkhh do those reports come also with a way to fix them? Are patches attached too? In what percentage of them?
@pemensik @dirkhh very few have any attempts at fixes
@bagder @dirkhh which is in my opinion the most important problem with AI reports. If they can use AI to find issues, they should use it to produce fixes as well. Ideally with a test case attached too. Then the burden on human maintainers would be only the review part, not everything else.
@pemensik @bagder @dirkhh I have proposed fixes on some security reports to projects. One reason I'm leery of doing so every time is that a well-written report can just as easily be fed by the receiver into their own Claude to do the same work - and they'll have far more project context to guide it towards an acceptable result.
Remember that security reports, even before 2026, were often filed against projects that the reporters aren't already intimate with. We don't want to discourage reports of issues just because they don't include a fix.
At the same time, it is *always* fair for security report receivers to ask the reporter whether they are able to write or generate one.
@gpshead @bagder @dirkhh I disagree. First, you assume every upstream is paying for Claude access (or something similar) and is proficient with it. That is not necessarily true. Your report causes human work. If you can minimize that work and have the resources to do so, a proposed fix should be included automatically. If it's not accepted, okay, only a few watts wasted.
@pemensik @gpshead @dirkhh agreed. I'm not using any AI on the security reports we receive. It is enough that it was used to produce the report. When I receive one, it is of utmost importance that we get the details right, and we can't trust the AIs to do that. I don't see this changing anytime soon.