Nobody likely wants to hear this, but if you plan to get into or stay in IT, software development, systems administration, or any discipline within “cybersecurity”, you *ABSOLUTELY* need to make or take time this year to identify what you can do that AI cannot do, and start building some of those things if your list is short or empty. The weavers in the 1800s used violence to get a 20-year pseudo-reprieve before they were pushed into obsolescence. (1/19)

We've (this includes me) got maybe ~18 months.

I'm as pushback-on-this-“AI”-thing as makes sense/is possible. I'd like for the bubble to burst. Even if it does, the rulers of our clicktatorship will just fuel a quick rebuild. (2/19)

In the past ~4 weeks I have personally observed some irrefutable things in "AI" that are very likely going to cause massive shocks to the employment models in the aforementioned sectors. I know some have already seen minor shocks. They are nothing compared to what's very probably ahead.

In my (broad) field, I think that there are some things that make humans 110% necessary: (3/19)

First is “Judgment under uncertainty with real consequences.” These new "AI" systems can use tools to analyze a gazillion sessions and cluster payloads, but they do not (or absolutely should not) bear responsibility for the "we're pulling the plug on production" decision at 3am. This “weight of consequence” shapes human expertise in ways that inform intuition, risk tolerance, and the ability to act decisively with incomplete information. (4/19)
Organizations will continue needing people who can own outcomes, not just produce analysis. (5/19)
Another is “Adversarial creativity and novel problem framing.” The more recent “AI” systems are actually darn good at using tools to do pattern matching against known patterns and recombining existing approaches. They absolutely suck at the “genuinely novel”; y'know, the attack vector nobody has documented, the defensive technique that requires understanding how a specific organization actually operates versus how it should operate. (6/19)
The best security practitioners think like attackers in ways that go beyond "here are common TTPs." (7/19)
A yuge one is “Institutional knowledge and relationship capital.” Understanding that the finance team always ignores security warnings (especially Dave) during quarter-close, that the legacy SCADA system can't be patched because the vendor went bankrupt in 2019, that the CISO and CTO have a long-running disagreement about cloud migration: these are just a few examples of where context shapes which recommendations are actually actionable. (8/19)

Many technically correct analyses are organizationally useless.

The biggest one? “The ability to build and maintain trust.” When a breach happens, executives don't want a report from an “AI”. They want someone who can look them in the eye, explain what happened, and take ownership of the path forward. The human element of security leadership is absolutely not going away. (9/19)

It'd be great if folks in very subdomain-specific parts of cyber would provide similar lists. I try to stay in my lane.

So, what are some of these “very human-only things”?

Develop depth in areas that require your presence (physical or virtual) or legal accountability. Disciplines such as incident response, compliance attestation, or security architecture for air-gapped or classified environments. These have regulatory and practical barriers to full automation. (10/19)

Build expertise in the seams between systems. Understanding how a given combination of legacy mainframe, cloud services, and OT environment actually interconnects requires the kind of institutional archaeology (or the powers of a sexton) that doesn't exist in training data. (11/19)
I know this will get me tapping mute or block a lot, but I'm fairly certain you're going to need to get comfortable being the human in the loop for “AI”-augmented workflows. The analyst who can effectively direct tools, **validate outputs** (b/c these things will always make 💩 up), and translate findings for different audiences has a different job than before but still a necessary one. (12/19)
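To make the "validate outputs" part concrete: here's a toy Python sketch of one mechanical check you can run before a human even starts reading. Everything in it is made up for illustration (the summary format, the log format, the naive IPv4 regex); the point is just the shape of the idea: if the model's triage summary cites indicators that don't exist in the source data, you've caught a fabrication cheaply.

```python
# Toy human-in-the-loop check: every indicator an LLM triage summary
# cites must actually appear in the raw log it claims to summarize.
# Formats and field names here are hypothetical, not from any real tool.
import json
import re

IOC_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")  # naive IPv4 matcher

def validate_summary(summary_json: str, raw_log: str) -> list[str]:
    """Return indicators the model cited that are NOT present in the log."""
    summary = json.loads(summary_json)
    cited = set()
    for finding in summary.get("findings", []):
        cited.update(IOC_PATTERN.findall(finding.get("evidence", "")))
    present = set(IOC_PATTERN.findall(raw_log))
    return sorted(cited - present)  # anything left here was likely made up

log = "2025-01-03 10:02:11 DENY src=203.0.113.7 dst=198.51.100.9\n"
summary = json.dumps({"findings": [
    {"evidence": "repeated denies from 203.0.113.7"},
    {"evidence": "beaconing to 192.0.2.99"},  # not in the log: fabricated
]})
print(validate_summary(summary, log))  # ['192.0.2.99']
```

Obviously real validation is messier than regex-matching IPs, but even this level of "trust but grep" catches a surprising amount of confidently-stated 💩.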
**Learn to ask better questions.** Bring your hypotheses, your domain expertise, and a sense of which threads are worth pulling to the table. That editorial judgment about what matters is undervalued, and is going to take a while to infuse into "AI" systems. (13/19)
The very uncomfortable truth: there will be fewer entry-level positions that consist primarily of "look at alerts and escalate." That pipeline into the field is narrowing at a frightening pace. (14/19)
Again, this is gonna lose me follows (aw) and cause me to mute/block a lot of replies, but the folks who thrive will be those who can figure out which "AI" capabilities aren't complete garbage and wield them with uniquely human judgment rather than competing on tasks where “AI” has clear advantages. A year ago, even with long covid brain fog, I could out-"John Henry" all of the commercial models at programming, cyber, and writing tasks (both in speed and quality). (15/19)
Now, with the fog gone, I'm likely ~3 months away from being slower than "AI" on a substantial number of core tasks that it can absolutely do. I've seen it. I've validated the outputs. It sucks. It really really sucks. And, it's not because I'm feeble or have some other undisclosed brain condition (unlike 47). These systems are being curated to do exactly that: erase all of us John Henrys (ref for those unaware of the lore: https://en.wikipedia.org/wiki/John_Henry_(folklore)) (16/19)

My formal essay on this a couple weeks ago said the following in way too many pages: (17/19)
What concerns me most isn't the senior practitioners; I'll/they'll/you'll adapt and likely become that much more effective. It's the junior folks who won't get the years of pattern exposure that built our intuition in the first place. That's a pipeline problem the industry hasn't seriously grappled with yet and isn't likely to b/c of the hot, thin air in the offices and board rooms of myopic and greedy senior executives. (18/19)

NOTE: I will mute or block all caustic replies from "AI Vegans". You do your religion the way you want to. I'm trying to practically help folks.

Adding in enough hashtags **solely** to help the folks blocking "AI" content.

#AI #ArtificialIntelligence #LLM #ChatGPT #Claude #Gemini #Anthropic #OpenAI #Google #Microsoft #agentic #MCP #agent (19/19)

@hrbrmstr

A relative works in advertising/graphic design. The majority of their work is “make 15 copies of this photo of a desk and put one of our 15 different laptop models into each photo” and…

Modern generative AI can do that without any further work. Sure the pictures look fake, but they were fake before too and what took a human an hour in Photoshop takes ChatGPT seconds.

AI isnโ€™t going to replace all humans, but it doesnโ€™t need to to devastate entire sectors.

@hrbrmstr

And we in the tech space/knowledge economy are unprepared.

Can Claude write a high speed network traffic capture engine that parses some obscure protocol only known by ten people? No, but how often do you need that?

The majority of coding, just like the majority of anything, is simple stuff for small projects. Claude absolutely can make a “good enough” webpage for a car dealership. Is it going to be buggy? Sure, but so was the page from Guy in His Pajamas Consulting, Inc.

@rk oh. um… i did not know that company name was taken…

**furiously does search & replace in his LLC incorporation draft**

AI Fails at Most Remote Work, Researchers Find - Slashdot

A new study "compared how well top AI systems and human workers did at hundreds of real work assignments," reports the Washington Post. They add that at least one example "illustrates a disconnect three years after the release of ChatGPT that has implications for the whole economy." AI can acco...

@Viss Aye (boy howdy has that made the rounds) but I didn't say "automate jobs away". And, there's no way I'd give any model a visual task to complete (in the setup preferences in my Claude I have a note saying "never ever make a chart").

There are some design flaws with the study too.

@hrbrmstr
"AI Vegans" ๐Ÿ˜‚

@hrbrmstr thanks for this Bob. We were actually discussing LLM capabilities in our research lab Friday, and how others are using it.

I try to use it as little as possible: I know my domain well, my work is 99% writing bioinfo analysis code, my muscle memory is good, and knowing where and how to look for other code has served me pretty freaking well.

But boss is encouraging me to find areas where I can use it, partly because if I don't, I might find I don't have a job, or am moving too slow. /1

@hrbrmstr and not in a threatening way, mind you, more like a "this is where things are going, be prepared, and figure out ways it can help you". /2

@hrbrmstr the one that really worries me, is they are having students use it when learning to code in python. Boss basically has a set of exercises they go through, with regular code critiques from them, and is encouraging students to use LLMs to help generate code.

And I worry how much of students' ability will be lost because of how much "struggle" will be removed from the coding. The struggle to parse docs, see examples, and make them work for their own situation. /3

@hrbrmstr but maybe it's really just one step removed from SO, with tons more patterns to benefit from.
@rmflight Wow, so (IMO) nobody should be teaching students how to use LLMs to generate code before they teach students the fundamentals of programming and at least one programming language and the student writes at least a few projects on their own. I'm fine with letting LLMs review said code and provide feedback (i think that's a great use of them tbh). I'm also OK with LLM-enabled IDEs being allowed to do limited code completion for new students. (1/3)
@rmflight Up until November I would not let Claude do anything with R code (I still use R quite a bit despite not being thrilled with the state of R land) but that changed with v4.5 of the models. (1/3)
I'm pretty similar b/c I, too, know my domain well and also have solid muscle memory, and keep abreast of the latest bits, and have Kagi for amazing code/etc. discovery. (2/3)

But, I have been increasingly creating workflows and what Claude calls "skills" to offload some analysis tasks b/c it 100% can do them (the output still needs me to read through it b/c I won't be trusting LLMs to be autonomous any time soon) and I have more work than time and we're not rly in a position to spend $ @ work.

I never would have done that a year ago or even six months ago. (3/3)
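For anyone curious what that "offload but never trust it to be autonomous" shape looks like in practice, here's a toy Python sketch. Every name in it is hypothetical (run_model is a stand-in for a real model call; this is not Claude's actual skills API): the model drafts the analysis, the draft lands in a queue, and nothing moves downstream until a human signs off.

```python
# Toy sketch of an offload workflow with a mandatory human review gate.
# run_model is a stand-in for a real LLM API call; all names hypothetical.
from dataclasses import dataclass, field

def run_model(task: str) -> str:
    # Pretend this calls a model and returns its draft analysis.
    return f"draft analysis for: {task}"

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, task: str) -> None:
        # Model output is staged, never applied automatically.
        self.pending.append(
            {"task": task, "draft": run_model(task), "approved": False}
        )

    def approve(self, idx: int) -> str:
        # Only an explicit human approval releases the draft downstream.
        item = self.pending[idx]
        item["approved"] = True
        return item["draft"]

q = ReviewQueue()
q.submit("cluster yesterday's DNS anomalies")
print(q.pending[0]["approved"])  # False until a human signs off
print(q.approve(0))
```

The design choice that matters is the default: the queue can only stage work, and release is a separate, human-initiated step. That's the whole "I read through every output" discipline, encoded.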