nobody confident in their own abilities is panicking

https://www.theregister.com/2026/02/23/claude_code_security_panic/?td=rt-3a

the people who are panicking are signaling.

Infosec community panics as Anthropic rolls out Claude code security checker (The Register)

moreover, nobody who has ever tried to use any llm to do code stuff for hours/days/weeks at a time is panicking either.

even people who are deep experts in what they do, who use llms to do stuff day to day, have to put a brick in a tube sock, put that in another tube sock, and swing it hard to bash the llm in the face over and over again to get it to behave and obey. and often that workout takes as much time as not using an llm.

everyone shitting their pants is signaling.

fucking good.
"security as we know it" is pay to play, zero boundaries, fraught with grifters, liars and cheats, shitloads of friendly-fire, people buying cert bootcamps for fake credibility, overdependence on shit like the cissp, people with zero computer experience directing whole armies of super technical folks

let it end.
it desperately needs a reboot.

if youve ever been burned because some asshole in HR shitcanned your resume because "you didnt go to the right college" or you couldnt score a gig because "you refused to get a cissp", or if youve ever ragequit a job because you were just "the token security person who was only there to fulfill a checkbox, and nobody listened to you and you felt like your job didnt matter" then you should want it to burn down too

@Viss

According to the AI developer, Claude Code Security is context-aware - as opposed to simply doing static code analysis. It "reads and reasons about your code the way a human security researcher would: understanding how components interact, tracing how data moves through your application, and catching complex vulnerabilities that rule-based tools miss," the company said.

It's really hard to tell the difference between a human and the masterful work of our friend Claude.

@jackryder @Viss I’m not even a coder and this sounds like malarkey to me. Claude Code may not allow buffer overflows or anything like that, but it will totally introduce subtler bugs and security issues without “knowing” that it does so.

@Viss

i've spent most of my career knowing i'd need to get my resume to the hiring manager and bypass HR if i wanted to have any chance of not getting "screened". most HR spend no more effort trying to understand position requirements than netflix spent on its "recommended" algorithms.

LLM resume reviews will be that bad or worse. burn it all down.

@Viss When I got into security something like 15 years ago, it was so different. At that time, in the Mac community, I could make a difference, and do meaningful things. That’s so much harder to do now, with so many stupid, bureaucratic roadblocks, and I’m glad I’m looking at a career in the security industry in the rear view mirror.

@Viss

Don't hold back Viss. Tell us how you really feel. :-)

But seriously, to the point of the original article, yeah, no.

If I'm being very generous and allow that a "spicy linter" might be a halfway decent SAST (static application security testing) tool, that best case scenario would still be overwhelmed by the new and interesting security bugs introduced by their code generating brethren, "spicy autocomplete."

Full agree with Viss on the main point about folks with deep technical view.

@Viss Amen, brother. This is exactly why I left ITSec and went back to Ops on Big Iron.

@Viss normally - I'm all for a session of "Let's watch Kurtz & company squirm" because of their shitty past behaviour and righteous clusterfucks they've overseen at McAfee and Crowdstrike.

However - even as jaded as I am with regards to these folks - even I don't think that they earned or deserved that flak.

Just another indication that people who have no knowledge about the infosec or AI industry are investing billions of dollars into something they don't understand and are just as apt to yank funding because of panic.

And it's going to get worse as the AI bubble pops - and it will.

Sad part is - it was never the folks at the top who suffered when the dot com bubble or the sub prime mortgage bubble burst - all the execs seemed to have hand crafted artisanal golden parachutes ready made for when excrement met fan.

No - it's the little folks who ended up paying for it - in terms of their savings or 401K or what have you getting decimated as congress shuffled funds to subsidize these "too big to fail" entities that did just that.

And that's what will happen again with all this AI hype - albeit at a larger scale this time, and it's going to be the little folks footing the bill for the techbro enabled hubris that has brought us to the brink yet again.

@cjust with any luck enough of the system will break
@Viss "LLMs are fantastic for security and have a great opportunity to actually make a dent in the coming wave of software vulnerabilities[...]"

Yeah... No.
We will believe it when we see it happen.
@Viss Panic is too strong. I am concerned about the folks who don’t understand any of these security products, who have never tried to use an LLM to do anything beyond chat prompts, but who nonetheless have decision-making power over enterprise information security, listening to any of this stuff and making decisions.
@Viss Panic! At the Infosec?
@catsalad surely it would be (kernel)panic at the cisco :D
@Viss @catsalad y'all write jokes, not tragedies.
@Viss "Oh you know IOS? Which one–the fruit or the trash fire?"
@catsalad @Viss
🎵 I chime in with a "Haven't you people ever heard of commenting your goddamn code? No..."
@jackryder @catsalad dat fallout boy doe
@Viss @catsalad
Wrong Fallout but still kind of cool.
DJ Cummerbund - I Write Sugar Not Watermelons (YouTube)
@catsalad @Viss what's my uptime again?

@Viss What I am not confident in is the ability of tech CEOs to prioritize delivering products that are not pure shit.

Delivering quality vs. delivering pure crap at a much lower cost?

@Viss The real victims here are the juniors and people recently entering a new field. LLMs teach you nothing (you have to do the learning yourself, like you always do), yet they give the illusion of productivity. The game is rigged so that junior devs are rewarded for pretending to gain understanding, when all they do is lean on the LLMs and hope they don’t fuck up.

@Viss Hey now, Claude found an SQL injection in my code and I like to think I have a pretty good practice of secure coding.

It thinks the statically typed i32 is an injection vulnerability and wants to fix it with more than a hundred lines of crud because it doesn’t understand how to make parameterized statements in my SQL library. It also made all of that crud public API in ways it could easily be called out of order and make new state issues. But that’s exactly the point.
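the false positive described above is worth spelling out: a parameterized statement with a typed integer is already injection-safe, so "fixing" it generates busywork, not security. a minimal sketch using python's stdlib sqlite3 module (the poster's actual language and SQL library are unnamed, so this is an illustrative stand-in, not their code):

```python
import sqlite3

# in-memory database with one table for the demo
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")

def get_user(user_id):
    # parameterized query: the driver binds user_id as a value, so even a
    # hostile string could not alter the SQL structure -- and a statically
    # typed integer can't carry SQL at all. flagging this as injectable
    # is exactly the kind of ghost report the thread is complaining about.
    cur = conn.execute("SELECT id, name FROM users WHERE id = ?", (user_id,))
    return cur.fetchone()

def get_user_unsafe(user_id):
    # string interpolation into SQL: THIS is the pattern a scanner
    # should flag, because a string argument could rewrite the query.
    cur = conn.execute(f"SELECT id, name FROM users WHERE id = {user_id}")
    return cur.fetchone()

print(get_user(1))  # (1, 'alice')
```

a reviewer who can read the first function knows in seconds there is nothing to fix; a tool that cannot tell these two apart just manufactures triage work.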

@Viss getting supremely annoyed at that headline and how much bullshit it's carrying
@Viss oh but I will panic, just not for the reasons they think.
@Viss the people who might be panicking for good reason are software maintainers at companies where these agents are going to be given free rein to fix usually inconsequential yellow flags by inserting reams of unreviewed code.
@kevingranade then the leadership of those companies are going to suffer quite a lot when all their engineers quit, and their product catches fire, and all their customers leave.
@Viss yes it doesn't change the overall outcome, just an observation.
They can't catch fire and fall into the swamp fast enough.
@kevingranade it would very much be nicer if the folks making all these bad decisions actually felt some of the consequences, yeah, heh
@shafik there will be, at some point, enough people willing to deal with the pain of moving off ancient stuff like that. it may suck at first, but it will basically have to happen because nobody is exactly teaching fortran and cobol these days, so as soon as those engineers age out, the shit becomes egyptian hieroglyphs

@Viss

It is basically already like that, I think Vernor Vinge got it right. If we are still around ages from now it will be layers and layers of legacy code no one understands all the way down.

It is a very interesting thought process for someone who is in the depths of software development in big tech to really plan out what such a long term migration would look like just for one company. Once you get it, it is very humbling to realize how hard it really is.

@Viss

I am also reading "Thinking in Systems" and it is a good book to read if you are thinking about this kind of stuff:

https://hachyderm.io/@shafik/116077397403274931

@Viss oh great, so this hyperactive, severely ADHD, junior intern who requires very detailed instructions to do anything useful and still promptly forgets their own name and what they were doing every 15 minutes is going to replace me?

I'm not panicking. I'm laughing. A lot.

@0xtero in the spirit of laughing a lot, i just spent like two hours swapping gpus with my desktop and gaming rig so that i can run ollama with some decent model, so that i can light up some incus containers and fuck around with weird agentic bullshit and fake mcp servers in order to do the research for the talk i submit to securityfest :D

soon you will be laughing at me too!

@Viss Yeah, as a security-minded devops engineer, this is dope. (Well, y'know, aside from all the general ethical/environmental/etc. concerns about LLM use.) Having more "eyes" out looking for security vulnerabilities is a good thing, and especially so when one set of "eyes" is biased in a different way than typical human reviewers and thus is well placed to notice some subset of problems that humans would probably miss.

Of course, that only applies as long as it's used sensibly. Which means using LLMs to report issues for human review and validation, not letting an agent loose on a code base with the ability to automatically file security reports for anything it finds. (I have little confidence that the tool will actually be used sensibly in most cases.)

@diazona you should be aware that i am actively working on research that intends to measure just how often llms lie about shit, even when using skills and mcp servers, because at the end of the day, no matter what layers you put on top of an llm, it still fucking lies and hallucinates - even when its told to use skills and mcp servers

so.. your sentiment, while optimistic, makes the assumption "that this shit works"

but .. it doesnt.
at least not with enough precision to be relied upon

@Viss Yeah, that was the whole point of my last paragraph
@diazona but even using llms to report issues for human review will be problematic as humans will end up chasing ghosts
@Viss Depends on how frequently the reports are legitimate and how much time the reviewers spend chasing ghost reports versus the benefit they gain from the legitimate ones. Different organizations/groups/developers will draw the line in different places. In some cases I could imagine if the LLM has a 1% hit rate that's good enough, whereas an individual developer or a team working on a low-impact project probably wouldn't bother until the rate gets much higher, if at all.
@diazona heh, imagine trying to propose a budget to finance by saying "99% of the time our analysts spend is complete bullshit, gimme more money"
Feature Request: Claude should know its runtime environment (Desktop App vs CLI vs Web) · Issue #28144 · anthropics/claude-code (GitHub)
@Viss Not looking forward to someone running this, thinking everything is all kosher to load, and then taking down a quarter of the internet.
@catscatscats time to selfhost everything you possibly can :D
@Viss I think I would panic if this were my role - but mostly because of a general "AI" problem, which is that it eliminates tasks needed to give new people experience and ways to grow in to their role
@Viss (admittedly I'm also not at all confident in my ability, except for the brief moments I have to deal some of the stuff actual vendors ship to actual customers, but that's another story)
@Namnatulco assuming it works
@Viss even if it doesn't work, fixing is probably done by someone that would normally teach a new team member; but from what I've seen, "AI" has led to a hiring stop...

@Viss Having used some static code analyzers in the past, I have to honestly wonder if it can be worse than current ones.

The ones I've used were a festival of false positives to the point of being almost worthless.

(and I am not for using AI in any way...it's just they were that bad...)

@zombie042 well its a mixed bag. It is measurably useful and it does actually find stuff - but if you cannot tell yourself that what its showing you is bullshit, theres no way to tell the wheat from the chaff. so unless these things are being driven by people who can tell, shits gonna get ugly really fast