We've (this includes me) got maybe 18 months.
I'm as pushback-on-this-"AI"-thing as makes sense/is possible. I'd like for the bubble to burst. Even if it does, the rulers of our clicktatorship will just fuel a quick rebuild. (2/19)
In the past ~4 weeks I have personally observed some irrefutable things in "AI" that are very likely going to cause massive shocks to the employment models in the aforementioned sectors. I know some have already seen minor shocks. They are nothing compared to what's very probably ahead.
In my (broad) field, I think that there are some things that make humans 110% necessary: (3/19)
Many technically correct analyses are organizationally useless.
The biggest one? "The ability to build and maintain trust." When a breach happens, executives don't want a report from an "AI". They want someone who can look them in the eye, explain what happened, and take ownership of the path forward. The human element of security leadership is absolutely not going away. (9/19)
It'd be great if folks in very subdomain-specific parts of cyber would provide similar lists. I try to stay in my lane.
So, what are some of these "very human-only things"?
Develop depth in areas that require your presence (physical or virtual) or legal accountability. Disciplines such as incident response, compliance attestation, or security architecture for air-gapped or classified environments. These have regulatory and practical barriers to full automation. (10/19)
NOTE: I will mute or block all caustic replies from "AI Vegans". You do your religion the way you want to. I'm trying to practically help folks.
Adding in enough hashtags **solely** to help the folks blocking "AI" content.
#AI #ArtificialIntelligence #LLM #ChatGPT #Claude #Gemini #Anthropic #OpenAI #Google #Microsoft #agentic #MCP #agent (19/19)
A relative works in advertising/graphic design. The majority of their work is "make 15 copies of this photo of a desk and put one of our 15 different laptop models into each photo" and…
Modern generative AI can do that without any further work. Sure the pictures look fake, but they were fake before too and what took a human an hour in Photoshop takes ChatGPT seconds.
AI isn't going to replace all humans, but it doesn't need to in order to devastate entire sectors.
And we in the tech space/knowledge economy are unprepared.
Can Claude write a high speed network traffic capture engine that parses some obscure protocol only known by ten people? No, but how often do you need that?
The majority of coding, just like the majority of anything, is simple stuff for small projects. Claude absolutely can make a "good enough" webpage for a car dealership. Is it going to be buggy? Sure, but so was the page from Guy in His Pajamas Consulting, Inc.
@rk oh. um… i did not know that company name was taken…
**furiously does search & replace in his LLC incorporation draft**
@hrbrmstr this just got posted to slashdot
https://it.slashdot.org/story/26/01/10/1926209/ai-fails-at-most-remote-work-researchers-find

A new study "compared how well top AI systems and human workers did at hundreds of real work assignments," reports the Washington Post. They add that at least one example "illustrates a disconnect three years after the release of ChatGPT that has implications for the whole economy." AI can acco...
@Viss Aye (boy howdy has that made the rounds) but I didn't say "automate jobs away". And, there's no way I'd give any model a visual task to complete (in the setup preferences in my Claude I have a note saying "never ever make a chart").
There are some design flaws with the study too.
@hrbrmstr thanks for this Bob. We were actually discussing LLM capabilities in our research lab Friday, and how others are using it.
I try to use it as little as possible: I know my domain well, 99% of my work is writing bioinfo analysis code, my muscle memory is good, and knowing where and how to look for other code has served me pretty freaking well.
But boss is encouraging me to find areas where I can use it, partly because if I don't, I might find I don't have a job, or am moving too slow. /1
@hrbrmstr the one that really worries me, is they are having students use it when learning to code in python. Boss basically has a set of exercises they go through, with regular code critiques from them, and is encouraging students to use LLMs to help generate code.
And I worry how much ability of students will be lost because of how much "struggle" will be removed from the coding. The struggle to parse docs, see examples, and make them work for their own situation. /3
@hrbrmstr I'm not privy to what exactly they are doing with LLM, but I do know they are emphasizing how the models get things wrong.
I will probably ask how learning to code with the LLM in the loop fits with Bloom's taxonomy of learning for code, something they emphasize a lot, and have done research into. Because it feels like it subverts parts of learning.
@rmflight @hrbrmstr I've splurged a bit of an essay here, fair warning. I'm not an AI vegan, so please don't block. It's just a scary environment for someone like me, who isn't a "Senior Anything", and has been told to use LLMs at work even though I didn't really want to…
In response to the thread: I think of it as the LLM catch-22.
As Bob says, it can make senior devs/programmers/specialists more efficient, because they usually have literally decades of experience, meaning fantastic intuition, really solid skills, and a *broad* set of skills too. I think the idea that those seniors can confidently and effectively use and direct LLMs is plausible, though I personally know a couple who haven't found that to be the case.
But. It seems many of us broadly agree that LLMs will negatively impact learners who are meant to be acquiring I guess what I'd call "domain wisdom". The earlier in that journey someone is, the less they should be using LLMs to do their work, because doing the work is the thinking. If you aren't thinking, good luck learning anything. Domain wisdom requires domain knowledge, so I expect these people to be acquiring very little of either.
So, how do you get from being a learner (not meant to be using LLMs for the thinking) to a senior (where it's ok to use them)? If you're not already a senior, that's the LLM catch-22. "My manager says I have to use LLMs if I ever want to be a senior, but I don't know enough to be a senior because I've been using LLMs so much and for so long."
In my opinion, senior teams and possibly even the government have to address this MEANINGFULLY. If memory serves, that's what your recommendation was in your report, Bob. Employees like me will do whatever we have to, to survive, because we need the wage. Students will likewise use LLMs if it's the path of least resistance, because let's face it, many of them aren't there to learn, and even if they are they may sometimes feel pressured to take an easier route. (1/2)
Execs and legislators need to put the right incentives in place so that learning is required, protected, valued… which they won't (surprise!). I wonder if we might see more serious/meaningful accreditation for e.g. software development, beyond the current taught university degrees and borderline-predatory boot camps. Or in 5 years will managers be scratching their heads, wondering why the abilities of their senior devs seem so much worse than 10 years ago - because we've ALL been deskilled so badly by not practicing the fundamentals of our craft as part of our day-to-day duties...?
IDK but yeah it feels pretty fucking dreadful to be in the industry at the moment. This is one reason why I've started to focus on learning a broader set of skills and practical ML, because I don't see AI being able to replace a knowledgeable data scientist with serviceable DevOps skills. There's a lot of business context involved, so realistically (or, hopefully) I believe the human is still very important. (2/2)
If you want an example of that, listen to the latest Changelog episode with the ex-GitLab CEO: "in 2026 you will 'manage' 6-7 agents". Podcast host: "but I can barely manage one".
For AI to be financially viable, it will at least require that from us (spending $10k/month on tokens) :)
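Back-of-the-envelope on what "$10k/month on tokens" actually buys, just to put the scale in perspective. The per-token prices and the input/output ratio below are pure assumptions for illustration, not any vendor's actual price list:

```python
# Rough arithmetic on the "$10k/month in tokens" remark above.
# All numbers are illustrative assumptions, not quoted prices.
input_price_per_mtok = 3.00    # USD per million input tokens (assumed)
output_price_per_mtok = 15.00  # USD per million output tokens (assumed)
monthly_spend = 10_000         # USD, the figure from the post above
working_days = 22

# Assume agentic coding traffic runs roughly 4 parts input to 1 part output (assumed ratio).
blended_price_per_mtok = (4 * input_price_per_mtok + output_price_per_mtok) / 5

tokens_per_month = monthly_spend / blended_price_per_mtok * 1_000_000
print(f"~{tokens_per_month / 1e9:.1f}B tokens/month")                            # ~1.9B at these assumptions
print(f"~{tokens_per_month / working_days / 1e6:.0f}M tokens per working day")   # ~84M
```

At those assumed rates that's on the order of a couple of billion tokens a month, i.e. several always-busy agents, which is roughly the "manage 6-7 agents" scenario.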
@defuneste let's see if LLMs can overcome basic physical constraints on computation
And one obvious issue explaining why there is not much gainful use of these systems, even within software-related fields, is that coming up with useful questions, combinations, or applications will always be the bottleneck.
This is why, in the end, all the hype is around coding up the stuff that was the target of SaaS services, i.e. CRUD-related work.
Ima cc: @marybethR b/c I think your thread has some nuggets she might want to use in the AI literacy course she developed and is teaching at UNE in a cpl weeks.
I'm going to try to do my part and start to at least help junior folks in cyber. While I "technically" did this for my CMU CISO class (https://codeberg.org/hrbrmstr/cmu-ciso-dds-ddi), I just used the opportunity of getting paid to do work to create something that will help everyone who needs to learn or care about threat intel. (3/6)
@hrbrmstr It's good of you to do so. Appreciate that Mastodon is a particularly bad place for vocal anti-AI sentiment, so it's understandable you don't talk about it more.
On that front, and stay with me(!), I'm pretty anti-LLM in a broad sense. I don't believe that any technology is "just a tool", so all the IP theft, exploited labour, de-skilling, environmental impact etc. plays a role in that position. At the same time, I've seen first hand where it can catch things we easily miss - like you say I think it can be good for code reviews - and so long as there is code generated for reproducibility I also fully expect there to be benefits for tasks like exploratory data analysis.
I share that because I think you already understand (and share) much of the negative sentiment it can draw out. You've probably had to deal with a bunch of rage bait responses, and I get it, that's super tiresome, but there are also people like me on here who may be a bit cranky but do accept there's a huge amount of value in hearing from someone like yourself. Someone very experienced who tries to use tools in a decent way. It can set a good example and bring some sanity and practicality to the hype.
But, I have been increasingly creating workflows and what Claude calls "skills" to offload some analysis tasks b/c it 100% can do them (the output still needs me to read through it b/c I won't be trusting LLMs to be autonomous any time soon) and I have more work than time and we're not rly in a position to spend $ @ work.
I never would have done that a year ago or even six months ago. (3/3)
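For anyone curious, the "offload the first pass, human reviews the output" pattern described above is roughly this shape. A minimal sketch using the Anthropic Python SDK; the model ID, prompt, file name, and function name are assumptions for illustration, not what actually runs in my workflows:

```python
# Sketch of an "offload the analysis draft, human reviews before anything ships" workflow.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def draft_analysis(task: str, material: str) -> str:
    """Ask the model for a first-pass analysis; nothing goes out without a human read-through."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model ID; substitute whatever you actually use
        max_tokens=2000,
        messages=[{"role": "user", "content": f"{task}\n\n---\n\n{material}"}],
    )
    return response.content[0].text


draft = draft_analysis(
    "Summarize notable changes in this week's scan telemetry and flag anything anomalous.",
    open("weekly-telemetry-summary.txt").read(),  # hypothetical input file
)
print(draft)  # the draft still gets read and edited by a human before it goes anywhere
```

The point of the structure is the last line: the model produces a draft, not a decision, and a person stays in the loop.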