Nobody likely wants to hear this, but if you plan to get into or stay in IT, software development, systems administration, or any discipline within “cybersecurity”, you *ABSOLUTELY* need to make or take time this year to identify what you can do that AI cannot do, and create some of those items if your list is short or empty. The Luddite weavers in the 1800s used violence to get a 20-year pseudo-reprieve before they were pushed into obsolescence. (1/19)

We've (this includes me) got maybe ~18 months.

I'm as pushback-on-this-“AI”-thing as makes sense/is possible. I’d like for the bubble to burst. Even if it does, the rulers of our clicktatorship will just fuel a quick rebuild. (2/19)

In the past ~4 weeks I have personally observed some irrefutable things in "AI" that are very likely going to cause massive shocks to the employment models in the aforementioned sectors. I know some have already seen minor shocks. They are nothing compared to what's highly probable ahead.

In my (broad) field, I think that there are some things that make humans 110% necessary: (3/19)

First is “Judgment under uncertainty with real consequences.” These new "AI" systems can use tools to analyze a gazillion sessions and cluster payloads, but they do not (or absolutely should not) bear responsibility for the "we're pulling the plug on production" decision at 3am. This “weight of consequence” shapes human expertise in ways that inform intuition, risk tolerance, and the ability to act decisively with incomplete information. (4/19)
Organizations will continue needing people who can own outcomes, not just produce analysis. (5/19)
Another is “Adversarial creativity and novel problem framing.” The more recent “AI” systems are actually darn good at using tools to do pattern matching against known techniques and to recombine existing approaches. They absolutely suck at the “genuinely novel”; y'know, the attack vector nobody has documented, the defensive technique that requires understanding how a specific organization actually operates versus how it should operate. (6/19)
The best security practitioners think like attackers in ways that go beyond "here are common TTPs." (7/19)
A yuge one is “Institutional knowledge and relationship capital.” Understanding that the finance team always ignores security warnings — especially Dave — during quarter-close, that the legacy SCADA system can't be patched because the vendor went bankrupt in 2019, and that the CISO and CTO have a long-running disagreement about cloud migration: these are just a few examples of how context shapes which recommendations are actually actionable. (8/19)

Many technically correct analyses are organizationally useless.

The biggest one? “The ability to build and maintain trust.” When a breach happens, executives don't want a report from an “AI”. They want someone who can look them in the eye, explain what happened, and take ownership of the path forward. The human element of security leadership is absolutely not going away. (9/19)

It'd be great if folks in very subdomain-specific parts of cyber would provide similar lists. I try to stay in my lane.

So, what are some of these “very human-only things”?

Develop depth in areas that require your presence (physical or virtual) or legal accountability: disciplines such as incident response, compliance attestation, or security architecture for air-gapped or classified environments. These have regulatory and practical barriers to full automation. (10/19)

Build expertise in the seams between systems. Understanding how a given combination of legacy mainframe, cloud services, and OT environment actually interconnects requires the kind of institutional archaeology (or the powers of a sexton) that doesn't exist in training data. (11/19)
I know this will get me tapping mute or block a lot, but I'm fairly certain you're going to need to get comfortable being the human in the loop for “AI”-augmented workflows. The analyst who can effectively direct tools, **validate outputs** (b/c these things will always make 💩 up), and translate findings for different audiences has a different job than before, but still a necessary one. (12/19)
**Learn to ask better questions.** Bring your hypotheses, domain expertise, and knowledge of which threads are worth pulling to the table. That editorial judgment about what matters is undervalued, and is going to take a while to infuse into “AI” systems. (13/19)
The very uncomfortable truth: there will be fewer entry-level positions that consist primarily of "look at alerts and escalate." That pipeline into the field is narrowing at a frightening pace. (14/19)
Again, this is gonna lose me follows (aw) and cause me to mute/block a lot of replies, but the folks who thrive will be those who can figure out which “AI” capabilities aren't complete garbage and wield them with uniquely human judgment, rather than competing on tasks where “AI” has clear advantages. A year ago, even with long covid brain fog, I could out-"John Henry" all of the commercial models at programming, cyber, and writing tasks (both in speed and quality). (15/19)
Now, with the fog gone, I'm likely ~3 months away from being slower than “AI” on a substantial number of core tasks that it can absolutely do. I've seen it. I've validated the outputs. It sucks. It really really sucks. And, it's not because I'm feeble or have some other undisclosed brain condition (unlike 47). These systems are being curated to do exactly that: erase all of us John Henrys (ref for those unaware of the lore: https://en.wikipedia.org/wiki/John_Henry_(folklore)) (16/19)

My formal essay on this from a cpl weeks ago said the following in way too many pages: (17/19)
What concerns me most isn't the senior practitioners; I'll/they'll/you'll adapt and likely become that much more effective. It's the junior folks who won't get the years of pattern exposure that built our intuition in the first place. That's a pipeline problem the industry hasn't seriously grappled with yet and isn't likely to b/c of the hot, thin air in the offices and board rooms of myopic and greedy senior executives. (18/19)

NOTE: I will mute or block all caustic replies from “AI Vegans”. You do your religion the way you want to. I'm trying to practically help folks.

Adding in enough hashtags **solely** to help the folks blocking "AI" content.

#AI #ArtificialIntelligence #LLM #ChatGPT #Claude #Gemini #Anthropic #OpenAI #Google #Microsoft #agentic #MCP #agent (19/19)

@hrbrmstr thanks for this, Bob. We were actually discussing LLM capabilities in our research lab on Friday, and how others are using it.

I try to use it as little as possible: I know my domain well, my work is 99% writing bioinfo analysis code, my muscle memory is good, and knowing where and how to look for other code has served me pretty freaking well.

But boss is encouraging me to find areas where I can use it, partly because if I don't, I might find I don't have a job, or am moving too slow. /1

@hrbrmstr and not in a threatening way, mind you, more like a "this is where things are going, be prepared, and figure out ways it can help you". /2

@hrbrmstr the one that really worries me is they are having students use it when learning to code in Python. Boss basically has a set of exercises the students go through, with regular code critiques from the boss, and is encouraging students to use LLMs to help generate code.

And I worry how much of the students' ability will be lost because of how much "struggle" will be removed from the coding. The struggle to parse docs, see examples, and make them work for their own situation. /3

@rmflight Wow, so (IMO) nobody should be teaching students how to use LLMs to generate code before they've taught the fundamentals of programming and at least one programming language, and the student has written at least a few projects on their own. I'm fine with letting LLMs review said code and provide feedback (I think that's a great use of them, tbh). I'm also OK with LLM-enabled IDEs being allowed to do limited code completion for new students. (1/3)

@hrbrmstr I'm not privy to what exactly they are doing with LLM, but I do know they are emphasizing how the models get things wrong.

I will probably ask how learning to code with the LLM in the loop fits with Bloom's taxonomy of learning for code, something they emphasize a lot and have done research into. Because it feels like it subverts parts of learning.

@rmflight @hrbrmstr I’ve splurged a bit of an essay here, fair warning. I’m not an AI vegan 😭 so please don’t block. It’s just a scary environment for someone like me, who isn’t a “Senior Anything”, and has been told to use LLMs at work even though I didn’t really want to…

In response to the thread: I think of it as the LLM catch-22.

As Bob says, it can make senior devs/programmers/specialists more efficient, because they usually have literally decades of experience, meaning fantastic intuition, really solid skills, and a *broad* set of skills too. I think the idea that those seniors can confidently, and effectively, use and direct LLMs, is plausible, though I personally know a couple who haven’t found that to be the case.

But. It seems many of us broadly agree that LLMs will negatively impact learners who are meant to be acquiring I guess what I’d call “domain wisdom”. The earlier in that journey someone is, the less they should be using LLMs to do their work, because doing the work is the thinking. If you aren’t thinking, good luck learning anything. Domain wisdom requires domain knowledge, so I expect these people to be acquiring very little of either.

So, how do you get from being a learner (not meant to be using LLMs for the thinking) to a senior (where it’s ok to use them)? If you’re not already a senior, that’s the LLM catch-22. “My manager says I have to use LLMs if I ever want to be a senior, but I don’t know enough to be a senior because I’ve been using LLMs so much and for so long.”

In my opinion, senior teams and possibly even the government have to address this MEANINGFULLY. If memory serves, that’s what your recommendation was in your report, Bob. Employees like me will do whatever we have to, to survive, because we need the wage. Students will likewise use LLMs if it’s the path of least resistance, because let’s face it, many of them aren’t there to learn, and even if they are may sometimes feel pressured to take an easier route. (1/2)

Execs and legislators need to put the right incentives in place so that learning is required, protected, valued… which they won't (surprise!). I wonder if we might see more serious/meaningful accreditation for e.g. software development, beyond the currently taught university degrees and borderline-predatory boot camps. Or in 5 years will managers be scratching their heads, wondering why the abilities of their senior devs seem so much worse than 10 years ago, because we've ALL been deskilled so badly by not practicing the fundamentals of our craft as part of our day-to-day duties..?

IDK but yeah it feels pretty fucking dreadful to be in the industry at the moment. This is one reason why I’ve started to focus on learning a broader set of skills and practical ML, because I don’t see AI being able to replace a knowledgeable data scientist with serviceable DevOps skills. There’s a lot of business context involved, so realistically (or, hopefully) I believe the human is still very important. (2/2)

@jimgar this year they will be busy sending slop our way when they discover CLI coding tools like Claude Code. The hope should be that they learn the hard way the limits of the mistaken idea of "vibecoding"
@bioinfhotep Students, you mean?
@jimgar C-suites that would like to think of themselves as technical
@bioinfhotep Oh, for sure. There is an argument that it can be somewhat of a good thing, though. Not the slop, but the ideation. Because if they can show you a prototype, it might get them closer to showing you what they want. Of course it might also make them think your job is easy or whatever, but I don't think it's inherently a bad idea
@jimgar it is definitely going to increase ridiculous productivity expectations and will definitely increase integration problems with legacy systems. These things are fine if you don't have any legacy code to deal with

@bioinfhotep @jimgar

If you want an example of that, listen to the latest Changelog episode with the ex-GitLab CEO: "in 2026 you will "manage" 6-7 agents". Podcast host: "but I can barely manage one".

For AI to be financially viable, it will require at least that from us (spending $10k/month in tokens) :)

@defuneste @jimgar yeah the token economics are not sound, hence the recent drama between Anthropic and other coding "agent" tools like openCode. Claude Code is only affordable with their subsidized "Max" plan that the overpriced API users are indirectly paying for
@defuneste @jimgar there is going to be a big price correction or service degradation (they already quantize, without notice, the gigantic models that are trotted out from time to time for propaganda/hype) unless there is some major algorithmic or hardware improvement
@bioinfhotep But we were innovating so much right now, we were that close to solving cancer, and soon we will all have minimal basic income. 3 years of LLMs and so much tangible progress 😉

@defuneste let's see if LLMs can overcome basic physical constraints on computation

And one obvious issue explaining why there is not much gainful use of these systems, even within software-related fields, is that coming up with useful questions, combinations, or applications will always be the bottleneck

This is why all the hype is around coding up stuff that was the target of SaaS services, i.e. CRUD-related work, in the end

@jimgar this was such a well-crafted reply! (1/6)
aye. in the blather-form “MagentAI”, the wish/hope/prayer is that we get that meaningful intervention from gov or some set of billionaires who can stand up a foundation with access in all 50 states (+territories) and online. given how we've failed in every generation to intervene when "innovation" destroys entire industries, and given how likely it is we're stuck with some really bad and incompetent leadership for ~20 years, I'm not exactly hopeful… (2/6)

Ima cc: @marybethR b/c I think your thread has some nuggets she might want to use in the AI literacy course she developed and is teaching at UNE in a cpl weeks.

I'm going to try to do my part and start to at least help junior folks in cyber. While I “technically” did this for my CMU CISO class — https://codeberg.org/hrbrmstr/cmu-ciso-dds-ddi — I just used the opportunity of getting paid to do work to create something that will help everyone who needs to learn or care about threat intel. (3/6)

I made these two things, https://codeberg.org/hrbrmstr/ja4-mcp and https://codeberg.org/hrbrmstr/go-roast, to help me (and others) get up to speed on those two indicators, and also to use with LLMs to speed up learning about their technical details. (4/6)
I've been reticent to talk more publicly about all the ways I use this tech b/c of the vegans with pitchforks (and b/c, frankly, things still rly sucked till all the major model updates this past November). I'm pondering the best way to move forward, but it's likely going to be a cyber-focused AI website with a blog, e-books, and a companion repo. (5/6)
I am desperately concerned orgs are just going to abandon non-senior folks and whatever I can do to help folks get up to speed fast enough to stay above the cutoff line, I'm def going to do. (6/6)

@hrbrmstr It’s good of you to do so. I appreciate that Mastodon is a particularly bad place for vocal anti-AI sentiment, so it’s understandable you don’t talk about it more.

On that front, and stay with me(!), I’m pretty anti-LLM in a broad sense. I don’t believe that any technology is “just a tool”, so all the IP theft, exploited labour, de-skilling, environmental impact etc. plays a role in that position. At the same time, I’ve seen first hand where it can catch things we easily miss - like you say I think it can be good for code reviews - and so long as there is code generated for reproducibility I also fully expect there to be benefits for tasks like exploratory data analysis.

I share that because I think you already understand (and share) much of the negative sentiment it can draw out. You’ve probably had to deal with a bunch of rage bait responses, and I get it, that’s super tiresome, but there are also people like me on here who may be a bit cranky but do accept there’s a huge amount of value in hearing from someone like yourself. Someone very experienced who tries to use tools in a decent way. It can set a good example and bring some sanity and practicality to the hype.

@jimgar i will 100% be at the base of the pyre hurling a well-lit torch at the pile of dry wood if/when we get to burn this whole thing down :-)