nobody confident in their own abilities is panicking
https://www.theregister.com/2026/02/23/claude_code_security_panic/?td=rt-3a
the people who are panicking are signaling.
moreover, nobody who has ever tried to use any llm to do code stuff for hours/days/weeks at a time is panicking either.
even people who are deep experts in what they do, who use llms day to day, have to put a brick in a tube sock, put that in another tube sock, and swing it hard to bash the llm in the face over and over again to get it to behave and obey. and often that workout takes as much time as not using an llm at all.
everyone shitting their pants is signaling.
fucking good.
"security as we know it" is pay to play, zero boundaries, fraught with grifters, liars and cheats, shitloads of friendly-fire, people buying cert bootcamps to get people fake creditiblity, overdependence on shit like the cissp, people with zero computer experience directing whole armies of super technical folks
let it end.
it desperately needs a reboot.
Don't hold back Viss. Tell us how you really feel. :-)
But seriously, to the point of the original article, yeah, no.
If I'm being very generous and allow that a "spicy linter" might be a halfway decent SAST (static application security testing) tool, even that best-case scenario would still be overwhelmed by the new and interesting security bugs introduced by its code-generating brethren, "spicy autocomplete."
Fully agree with Viss on the main point about folks with a deep technical view.