One reason to fix the minor code issues the AI tools find is that if we don't, we know there's an army of "friendly helpers" who will soon report them as "probable security problems" down the line, because they can run the AI tools too.
"Researchers" can go to extreme lengths to argue for and claim vulnerabilities in code, yet almost none of them ever work on actually fixing the issue, whatever the final assessment of that issue turns out to be.

@bagder oh, this happened to me too, pre-LLM, when I ran a much smaller BBP

ugh

@bagder right now we're receiving droves of vuln reports for libssh, with one vulnerability already reported four times before we had time to push a security advisory. Some reports are very low effort, but so far they mostly look genuine; at some point, though, they'll probably all be slop.

@bagder so proving their value solely as a Cassandra?

Managers, meanwhile, like receiving practical solutions, not problems.

I'm not really active as a dev now, but I've fixed, say, code injection bugs in a data serialization back-end. Most of the work was convincing folk that it was a real risk (with demos), and doing some 'proper dev' work: writing a robust API with low-level automated testing, to prove the fix covered the whole class of exploits.
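The post doesn't name the serialization back-end or language involved, but the pattern it describes, replacing an injection-prone entry point with a restricted API and pinning the fix with a low-level test against the whole exploit class, can be sketched hypothetically in Python (the `unsafe_load`/`safe_load` names are illustrative, not from the original):

```python
import ast

def unsafe_load(text):
    # The classic injection bug: eval() executes arbitrary expressions,
    # so attacker-controlled serialized data can run code.
    return eval(text)  # shown only as the "before" state

def safe_load(text):
    # ast.literal_eval accepts only literal structures (strings, numbers,
    # tuples, lists, dicts, sets, booleans, None), which closes the whole
    # class of code-injection payloads instead of blocklisting one demo.
    return ast.literal_eval(text)

# A low-level test that pins the fix to the exploit class, not one payload:
payload = "__import__('os').getcwd()"
try:
    safe_load(payload)
    raise AssertionError("injection payload should have been rejected")
except ValueError:
    pass  # rejected, as intended

# Legitimate data still round-trips through the restricted API.
assert safe_load("[1, 2, {'a': 3}]") == [1, 2, {'a': 3}]
```

The point of the test is the one the author makes: it demonstrates the fix to skeptics and guards the whole class of exploits, not just the single proof-of-concept from the demo.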

My theme here is that fixing exploits requires expertise, often in areas where nobody else understands the goal. As the fixer, you have to take responsibility for setting the standard, often beyond the optics of 'works for my one use case'.

And this completeness of approach is often under-appreciated and seen as an unnecessary burden. In truth, good code limits future liability.

Liability is an aspect that AI slop disregards, and I think if we don't guard against it now, we're due a lot of pain later.

@bagder Thank you for sharing. This sounds like a form of "beg bounty", just with recognition sought instead of money. I guess I shouldn't be surprised.