nobody confident in their own abilities is panicking
https://www.theregister.com/2026/02/23/claude_code_security_panic/?td=rt-3a
the people who are panicking are signaling.
moreover, nobody who has ever tried to use any llm to do code stuff for hours/days/weeks at a time is panicking either.
even people who are deep experts in what they do, who use llms to do stuff day to day, have to put a brick in a tube sock, put that in another tube sock, and swing it hard to bash the llm in the face over and over again to get it to behave and obey. and often that workout takes as much time as not using an llm.
everyone shitting their pants is signaling.
fucking good.
"security as we know it" is pay to play, zero boundaries, fraught with grifters, liars and cheats, shitloads of friendly-fire, people buying cert bootcamps to get fake credibility, overdependence on shit like the cissp, people with zero computer experience directing whole armies of super technical folks
let it end.
it desperately needs a reboot.
According to the AI developer, Claude Code Security is context-aware - as opposed to simply doing static code analysis. It "reads and reasons about your code the way a human security researcher would: understanding how components interact, tracing how data moves through your application, and catching complex vulnerabilities that rule-based tools miss," the company said.
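For what the quoted claim actually means in practice, here's a hypothetical sketch (names and helper functions invented for illustration) of the kind of bug a purely rule-based linter misses: the tainted input crosses a function boundary before reaching the sink, so a line-local pattern match sees only a plain variable, and only data-flow tracing across functions connects the dots.

```python
# Hypothetical example: SQL injection that a line-local pattern rule misses.
# The tainted value is concatenated in one function and consumed in another,
# so a rule like "flag f-strings passed to the query sink" never fires --
# at the sink, the query is just an opaque variable.

def build_query(table: str, user_input: str) -> str:
    # Taint enters here: user_input is interpolated into SQL unescaped.
    return f"SELECT * FROM {table} WHERE name = '{user_input}'"

def handle_request(params: dict) -> str:
    # By the time the string reaches this point, the dangerous
    # concatenation is out of sight; only cross-function data-flow
    # tracing links params["name"] to the SQL text.
    query = build_query("users", params.get("name", ""))
    return query  # imagine cursor.execute(query) here

if __name__ == "__main__":
    crafted = "x' OR '1'='1"
    print(handle_request({"name": crafted}))
```

Whether Claude actually does this reliably is exactly what's in dispute, but that's the class of bug "context-aware" is supposed to mean.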
It's really hard to tell the difference between a human and the masterful work of our friend Claude.
i've spent most of my career knowing i'd need to get my resume to the hiring manager and bypass HR if i wanted any chance of not getting "screened". most HR departments spend no more effort trying to understand position requirements than netflix spent on its "recommended" algorithms.
LLM resume reviews will be that bad or worse. burn it all down.
Don't hold back Viss. Tell us how you really feel. :-)
But seriously, to the point of the original article, yeah, no.
If I'm being very generous and allow that a "spicy linter" might be a halfway decent SAST (static application security testing) tool, that best case scenario would still be overwhelmed by the new and interesting security bugs introduced by their code generating brethren, "spicy autocomplete."
Full agree with Viss on the main point about folks with deep technical view.
@Viss normally - I'm all for a session of "Let's watch Kurtz & company squirm" because of their shitty past behaviour and righteous clusterfucks they've overseen at McAfee and Crowdstrike.
However - even as jaded as I am with regards to these folks - even I don't think that they earned or deserved that flak.
Just another indication that people who have no knowledge about the infosec or AI industry are investing billions of dollars into something they don't understand and are just as apt to yank funding because of panic.
And it's going to get worse as the AI bubble pops - and it will.
Sad part is - it was never the folks at the top who suffered when the dot com bubble or the sub prime mortgage bubble burst - all the execs seemed to have hand crafted artisanal golden parachutes ready made for when excrement met fan.
No - it's the little folks who ended up paying for it - in terms of their savings or 401K or what have you getting decimated as congress shuffled funds to subsidize these "too big to fail" entities that did just that.
And that's what will happen again with all this AI hype - albeit at a larger scale this time, and it's going to be the little folks footing the bill for the techbro enabled hubris that has brought us to the brink yet again.