nobody confident in their own abilities is panicking
https://www.theregister.com/2026/02/23/claude_code_security_panic/?td=rt-3a
the people who are panicking are signaling.
moreover, nobody who has ever tried to use any llm to do code stuff for hours/days/weeks at a time is panicking either.
even people who are deep experts in what they do, who use llms to do stuff day to day, have to put a brick in a tube sock, put that in another tube sock, and swing it hard to bash the llm in the face over and over again to get it to behave and obey. and often that workout takes as much time as not using an llm.
everyone shitting their pants is signaling.
fucking good.
"security as we know it" is pay to play, zero boundaries, fraught with grifters, liars and cheats, shitloads of friendly-fire, people buying cert bootcamps to get people fake creditiblity, overdependence on shit like the cissp, people with zero computer experience directing whole armies of super technical folks
let it end.
it desperately needs a reboot.
According to the AI developer, Claude Code Security is context-aware - as opposed to simply doing static code analysis. It "reads and reasons about your code the way a human security researcher would: understanding how components interact, tracing how data moves through your application, and catching complex vulnerabilities that rule-based tools miss," the company said.
It's really hard to tell the difference between a human and the masterful work of our friend Claude.
i've spent most of my career knowing i'd need to get my resume to the hiring manager and bypass HR if i wanted any chance of not getting "screened". most HR departments spend no more effort trying to understand position requirements than netflix spent on its "recommended" algorithm.
LLM resume reviews will be that bad or worse. burn it all down.
Don't hold back Viss. Tell us how you really feel. :-)
But seriously, to the point of the original article, yeah, no.
If I'm being very generous and allow that a "spicy linter" might be a halfway decent SAST (static application security testing) tool, that best case scenario would still be overwhelmed by the new and interesting security bugs introduced by their code generating brethren, "spicy autocomplete."
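To make the "spicy linter" jab concrete, here's a toy sketch (entirely illustrative, not any real tool's rule set) of what a rule-based SAST check looks like: a pattern match that flags string concatenation inside an `execute()` call. It catches the obvious case, but it has no notion of data flow, so routing the tainted value through a helper defeats it — exactly the gap that "context-aware" tools claim to close.

```python
import re

# Caricature of a rule-based SAST check: flag string concatenation
# inside an execute() call. Purely illustrative.
RULE = re.compile(r'execute\([^)]*\+')

def scan(line: str) -> bool:
    # True means "flagged as possible SQL injection".
    return bool(RULE.search(line))

# Direct concatenation: the rule fires.
print(scan('cur.execute("SELECT * FROM t WHERE id=" + user_id)'))  # True
# Same taint, hidden behind a helper: the rule is blind to it.
print(scan('cur.execute(build_query(user_id))'))                   # False
```

Whether an LLM reliably closes that data-flow gap, rather than just adding a different class of misses, is the open question.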
Full agree with Viss on the main point about folks with deep technical view.
@Viss normally - I'm all for a session of "Let's watch Kurtz & company squirm" because of their shitty past behaviour and righteous clusterfucks they've overseen at McAfee and Crowdstrike.
However - even as jaded as I am with regards to these folks - even I don't think that they earned or deserved that flak.
Just another indication that people who have no knowledge about the infosec or AI industry are investing billions of dollars into something they don't understand and are just as apt to yank funding because of panic.
And it's going to get worse as the AI bubble pops - and it will.
Sad part is - it was never the folks at the top who suffered when the dot com bubble or the sub prime mortgage bubble burst - all the execs seemed to have hand crafted artisanal golden parachutes ready made for when excrement met fan.
No - it's the little folks who ended up paying for it - in terms of their savings or 401K or what have you getting decimated as congress shuffled funds to subsidize these "too big to fail" entities that did just that.
And that's what will happen again with all this AI hype - albeit at a larger scale this time, and it's going to be the little folks footing the bill for the techbro enabled hubris that has brought us to the brink yet again.

@Viss What I am not confident in is the ability of tech CEOs to prioritize delivering products that are not pure shit.
Delivering quality vs. delivering pure crap at a much lower cost?
@Viss Hey now, Claude found an SQL injection in my code and I like to think I have a pretty good practice of secure coding.
It thinks a statically typed i32 is an injection vulnerability and wants to fix it with more than a hundred lines of crud because it doesn't understand how to make parameterized statements in my SQL library. It also made all of that crud public API, in ways that could easily be called out of order and introduce new state bugs. But that's exactly the point.
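For contrast, the fix the model couldn't find is tiny. This sketch uses Python's stdlib `sqlite3` for illustration (the commenter's actual library and language are unknown): with a bound placeholder, the value never touches the SQL text, so a typed integer has nothing to inject in the first place.

```python
import sqlite3

# Illustrative only -- not the commenter's code or library.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, qty INTEGER)")

def insert_qty(qty: int) -> None:
    # The ? placeholder binds the value; the driver never splices it
    # into the query string, so there is no injection surface here.
    conn.execute("INSERT INTO items (qty) VALUES (?)", (qty,))
    conn.commit()

insert_qty(42)
row = conn.execute("SELECT qty FROM items").fetchone()
print(row[0])  # 42
```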
It is basically already like that, I think Vernor Vinge got it right. If we are still around ages from now it will be layers and layers of legacy code no one understands all the way down.
It is a very interesting thought process for someone who is in the depths of software development in big tech to really plan out what such a long term migration would look like just for one company. Once you get it, it is very humbling to realize how hard it really is.
I'm also reading "Thinking in Systems"; it's a good book if you're thinking about this kind of stuff.
@Viss oh great, so this hyperactive, severely ADHD, junior intern who requires very detailed instructions to do anything useful and still promptly forgets their own name and what they were doing every 15 minutes is going to replace me?
I'm not panicking. I'm laughing. A lot.
@0xtero in the spirit of laughing a lot, i just spent like two hours swapping gpus with my desktop and gaming rig so that i can run ollama with some decent model, so that i can light up some incus containers and fuck around with weird agentic bullshit and fake mcp servers in order to do the research for the talk i submit to securityfest :D
soon you will be laughing at me too!
@Viss Yeah, as a security-minded devops engineer, this is dope. (Well, y'know, aside from all the general ethical/environmental/etc. concerns about LLM use.) Having more "eyes" out looking for security vulnerabilities is a good thing, and especially so when one set of "eyes" is biased in a different way than typical human reviewers and thus is well placed to notice some subset of problems that humans would probably miss.
Of course, that only applies as long as it's used sensibly. Which means using LLMs to report issues for human review and validation, not letting an agent loose on a code base with the ability to automatically file security reports for anything it finds. (I have little confidence that the tool will actually be used sensibly in most cases.)
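That "report for human review, don't auto-file" pattern is simple to state in code. This is a hypothetical sketch (the `Finding`/`ReviewQueue` names are made up, not any real tool's API): LLM output only ever lands in a pending queue, and nothing becomes a filed report without an explicit human decision.

```python
from dataclasses import dataclass, field

# Hypothetical human-in-the-loop gate for LLM security findings.
@dataclass
class Finding:
    file: str
    description: str
    approved: bool = False

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)
    filed: list = field(default_factory=list)

    def submit(self, finding: Finding) -> None:
        # LLM output can only ever enter the pending queue.
        self.pending.append(finding)

    def approve(self, finding: Finding) -> None:
        # Only an explicit human action promotes a finding to a report.
        finding.approved = True
        self.pending.remove(finding)
        self.filed.append(finding)

q = ReviewQueue()
f = Finding("auth.py", "possible SQL injection")
q.submit(f)
assert q.filed == []   # nothing auto-filed
q.approve(f)
assert q.filed == [f]  # filed only after human sign-off
```

The design point is that the agent has no code path to `filed` at all; approval is a separate, human-invoked operation.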
@diazona you should be aware that i am actively working on research that intends to measure just how often llms lie about shit, even when using skills and mcp servers, because at the end of the day, no matter what layers you put on top of an llm, it still fucking lies and hallucinates - even when its told to use skills and mcp servers
so.. your sentiment, while optimistic, makes the assumption "that this shit works"
but .. it doesnt.
at least not with enough precision to be relied upon
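The measurement Viss describes boils down to simple bookkeeping: collect model claims, compare them against known ground truth, and report the disagreement rate. The data below is made up purely to show the shape of that calculation.

```python
# Toy sketch of a hallucination-rate measurement. The ground truth and
# model claims here are fabricated examples, not real results.
ground_truth = {"tool_a": True, "tool_b": False, "tool_c": False}
model_claims = {"tool_a": True, "tool_b": True, "tool_c": True}

wrong = sum(1 for k in ground_truth if model_claims[k] != ground_truth[k])
rate = wrong / len(ground_truth)
print(f"hallucination rate: {rate:.0%}")
```

The hard part of the research isn't this arithmetic, of course; it's building ground truth the model can't weasel around.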

Summary Claude Code does not know which interface it is running in. When asked, it cannot distinguish between the Desktop App, the CLI/Terminal, or the Web interface. This leads to incorrect assump...
@Viss Having used some static code analyzers in the past, I have to honestly wonder if it can be worse than current ones.
The ones I've used were a festival of false positives to the point of being almost worthless.
(and I am not for using AI in any way...it's just they were that bad...)