@bagder oh, this happened to me pre-LLM too, when I ran a much smaller bug bounty program
ugh
@bagder so proving their value solely as a Cassandra?
Managers, on the other hand, like receiving practical solutions, not just problems.
I'm not really active as a dev now, but I've fixed, say, code injection bugs in a data serialization back-end. Most of the work was convincing people it was a real risk (with demos), then doing some 'proper dev': writing a robust API with low-level automated tests, to prove the fix covered the whole class of exploits.
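To make that concrete: the original back-end and language aren't named above, so here's a minimal hypothetical sketch in Python of the pattern I mean. It swaps an eval-style deserializer for a literal-only parser (safe_deserialize is my made-up name), and the test exercises representative injection payloads so the whole exploit class, not one demo case, stays covered.

```python
import ast
import unittest


def safe_deserialize(payload: str):
    """Parse a Python-literal payload without evaluating arbitrary code.

    ast.literal_eval only accepts literals (strings, numbers, lists,
    dicts, booleans, None), so expressions like __import__('os')
    raise ValueError instead of executing.
    """
    return ast.literal_eval(payload)


class InjectionRegressionTests(unittest.TestCase):
    def test_plain_data_round_trips(self):
        # Legitimate data still parses as before.
        self.assertEqual(
            safe_deserialize("{'id': 1, 'tags': ['a', 'b']}"),
            {"id": 1, "tags": ["a", "b"]},
        )

    def test_code_injection_payloads_are_rejected(self):
        # Each payload is a known exploit shape; the API must refuse all of them.
        for payload in (
            "__import__('os').system('id')",
            "(lambda: 1)()",
            "open('/etc/passwd').read()",
        ):
            with self.assertRaises(ValueError):
                safe_deserialize(payload)


if __name__ == "__main__":
    unittest.main()
```

The point isn't this particular API, it's that the low-level tests encode the standard: any future change that re-opens code execution fails the suite, not just the one reproduction from the original report.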
My theme here is that fixing exploits requires expertise, often in areas where nobody else knows what the goal should be. As the fixer, you have to take responsibility for setting the standard, often beyond the optics of 'works for my one use case'.
And this completeness of approach is often under-appreciated, seen as an unnecessary burden. In truth, with good code, we're limiting future liability.
Liability is an aspect that AI slop disregards, and I think if we don't guard against it now, we're in for a lot of pain later.