I just do not have the time to write anything long-form about this, but the ongoing Mozilla AI debacle is really indicative of a very, very troubling aspect of the broader AI mania, which is that a strong majority of even the *actually* well-intentioned, smart leaders in tech have had their brains fully cooked by these heuristics machines
putting the sociopathic billionaires to one side for a moment, there are lots of smaller tech leaders who *are* in fact congruent with the kinder, gentler stereotype from the aughts of really smart, clever people who think fast and talk fast and try to make a "win-win" scenario out of their business
the problem is, this kind of person, who is exemplified by a lot of mozilla leadership, has to have a particular orientation towards leadership and problem solving. tech filters aggressively for time-to-market, which means it filters for leaders who are comfortable making confident decisions under uncertainty
this means that good leaders develop a way to kind of vibe out a technology, quickly forming a partial understanding based on a few carefully-chosen data points. even at the best of times, they become overconfident in their facility with this skill, because they are disproportionately rewarded for it. but also a lot of them *do* have a track record of trying out emerging tech and developing a sufficient model of it by intelligently interpolating between the features they can quickly test
LLMs are *absolute concentrated brain poison* for these folks. They try out the LLM to see if it can solve a few simple problems and then they extrapolate to more complex problems. Wrongly. They infer from social cues in their cohort, which are absolutely fucked by the amount of synthetic money (and maybe fraud?) driving a subprime-bubble type mania. They infer from the plausibility of its outputs, which are absolutely fucked because the job of these models is to produce plausible outputs.
lest we feel superior in *our* ability to clock LLM garbage, this disaster among elites is a microcosm of something even worse that LLMs and their sister technologies of shitcoins and spambots are harbingers of: we *all* have heuristics that we need to use to make sense of the world, and heuristics work because of an implicit presumption of good faith in many interactions. most presumptions can be hacked if some significant plurality of actors in a system are always seeking maximum advantage
like, consider open source projects. casual code review works because you assume the code is written by a human and you presume that person is taking pride in their work. if you prompt a spambot to emit maximum PRs under your name so you can pad out your resume and land one of the 7 remaining jobs at FAANG, to stand out among the 5,000 applicants for every opening, the reviewer has no chance. they can't possibly contend with that volume of junk. projects will shut down, eventually.
even with an equivalent volume of PRs, code reviewers *must* use heuristics to judge how much effort to pour into any given line of code. what's the contributor's reputation? do you trust them to know what's going on? what's going on in this file? is this a tricky area of the code? are there subtle mistakes to look out for? spambots flatten the probability of a mistake into a uniform statistical distribution instead of following human patterns
so you have to be equally vigilant on every single line of code, which is way harder than you think (even if you already know that it's pretty hard). and you have to exercise that level of vigilance *because* the contributor didn't care enough to pay attention to the code themselves. it is a recipe for burnout.
the same is true of social heuristics. you might think of me as an AI critic, negative to the point of cynical viciousness. but, dear reader, I am embedded in the same social fabric as these tech leaders. I have friends I respect tremendously caught up to varying degrees in the AI bubble; it's impossible not to be. While I certainly have active social links to other skeptics, I would REALLY like to believe that my other friends are doing good, meaningful work, ethically.
so, to the extent that I am biased, I am actually biased in the *opposite* direction, actively looking for an "out" and willing to meet people more than halfway. it just so happens that LLMs are, as a wise person once said, "shit from a butt", and my *particular* heuristics do not allow for many handwaving shortcuts in this specific area
I am really learning to dislike being right about everything all the time, but like, not in a way that lets me make a billion dollars on a bold short position https://www.businessinsider.com/executives-adopting-ai-higher-rates-than-workers-research-2025-10
[Link preview: "Executives are adopting AI at higher rates than employees, study says" (Business Insider)]
@glyph I'm only being mildly glib when I say these are the people who deserve to be replaced by AI.
@glyph management sends out these mails in brainrot chatgpt english and i just feel bad / embarrassed for them. judging them so hard... and i'm from the 70s. what must the kids think?
@mcc correct opinion
@mcc I think that this extends even further, for example into education. While fully recognizing that most educators will not have the resources, time, skills or institutional power to do this, I also think that if you are a teacher assigning homework or assessments to students who can submit passable responses with an LLM, the long-term solution is to fundamentally rethink what the curriculum should be, and whether the assessments are conveying anything meaningful
@mcc I appreciate the necessity of stopgaps like "make students write essays in proctored environments with pen and paper" but they must be viewed *as* unfortunate (and temporary) stopgaps. if an LLM can do the homework it's not good homework.

@glyph @mcc
We still teach everyone arithmetic and multiplication tables¹ despite calculators being ubiquitous to a fault. We don't teach things only because you need to know them; we also teach them because they're foundational to learning the things you do need to know.

Writing structured short form text is foundational to a lot of other skills, and I don't see it going away. Especially as your test format has always been the norm in many places already.

1) I'm in my 50s - I *hope* we're still doing it.

@jannem @mcc My contention is not "if it can be automated, it's bad homework". It's specifically about LLMs. If a calculator can do it, you still need to know how to do it to understand what the calculator is doing. But if an LLM can do it, it's probably producing a wrong or content-free answer, which means your rubric *accepts wrong or content-free answers*.
@glyph @jannem I figure for a lot of basic writing exercise is repetitive and probably well represented in a language model, I wouldn't expect grade school or even high school writing assignments to have much in the way of novel content
@raven667 @glyph
Nobody needs one more essay on spring, or the value of small talk, or Napoleon's Russian campaign. The content isn't the point. The act of gathering your thoughts and synthesizing them into a coherent, readable text is.
@raven667 @glyph
With that said, I was an exchange student in the US many years ago, and I was surprised by just how rigid and formulaic the language classes were. An essay couldn't be structured to fit the subject; it was regulated down to how many paragraphs to write and what each paragraph should contain. Not fun, and, I'd argue, not a great way to learn to write.

@glyph @mcc I agree with "if a work product can be effectively produced by an LLM, then that work doesn't need to be done", but as a former CS professor I'd argue that student output is not the work product, student understanding is, and the point of assessments is to measure that understanding -- "university instructor" has a surprising amount of skill transfer to "tech lead", but if I wanted to build software, I'd never get every single junior on my team to independently reimplement a component I'd already built. On the other hand, "how would you implement this component I built last quarter" is an interview question I've used, but again the point of interviewing is assessing understanding.

The trouble with standard interview/assignment/exam questions is that they need to be simple enough to complete in a short time window (or an LLM context window), while still demonstrating some of the essential domain complexity, and alternative assessment measures require substantially more effort on the part of the assessor and/or assessee.

You would not believe the number of hours my wife spent checking references in her students' essays last term, but it turns out "put in correct page numbers" is easy to do if you've actually done the research, and nigh-impossible for an LLM. For interviewing I'm not sure: longer interview processes are an option, but that's hard on candidates; you could try a more internship/apprenticeship model of training & recruiting, but that still leaves you with the question of how you select the interns (also, PhD programs provide many examples of the failure modes of malicious or incompetent mentors).

The thing I worry about with "LLM-resistant assessments take substantially more labour" is access: given fixed investment, organizations will have to reduce the pool of people they provide opportunities to, which tilts the opportunities available even further toward those with existing wealth or connections.

@bruceiv @mcc

1. yep it's complicated and fundamentally we aren't allocating enough resources to educators to robustly defend against this, I had tons of qualifications in my post already for that reason

2. see also https://mastodon.social/@glyph/115629882612242081

3. this mostly reduces to the "forklift at the gym" argument which I also agree with

4. best of luck to your spouse, this is a rough time to be a teacher

@glyph I encountered an enthusiast citing the Anthropic stat about the amount of AI code they were publishing, and while it wasn't in a situation where I needed to respond, my immediate thought was "So you're thinking the statements of a publisher of largely greenfield code with no regulatory oversight, the ability to devolve blame to their customers, and a strong vested interest should be taken at face value when judging suitability for updating existing applications under strict regulations?"
@ancoghlan I am being pretty charitable in the thread because a charitable read does exist, but it is simultaneously true that a lot of people are just blasting out their metacognitive deficits on main
@glyph OK, I confess that was my second thought. My first thought was "It's fuckin' brainworms, man". Encountering genuine enthusiasm has the perverse effect of pushing me from my usual position of "There is something interesting here that's worth exploring further" to "It's a goddamn mind virus that must be purged with fire".
@ancoghlan @glyph it's weird being on the other side of this. We have customers writing support tickets about how much they would like to do home decoration via natural language, and we can't make the LLMs actually do the job. I sometimes wish I had whatever rose-colored glasses are being handed out.

@coderanger @ancoghlan @glyph

This becomes more relevant by the day... 🤦

https://www.youtube.com/watch?v=RiSIS3jcaXU

:|

[YouTube: "How vibe coders ruined everything"]

@BillySmith @coderanger @ancoghlan @glyph

Brilliant. Same category as this little piece, my all-time favourite.

https://www.youtube.com/watch?v=BKorP55Aqvg

[YouTube: "The Expert (Short Comedy Sketch)"]

@Brokar @coderanger @ancoghlan @glyph

I remember watching this when I was working in consultancy, and it made me cringe.

One of the main mistakes in that sketch was that the "expert" tried to directly answer the stupid questions, instead of saying, "That's an extended question that we would need to bring another expert on board to answer." Then bringing in an academic consultant who specialised in physics... :))

It would have allowed a lot more billable hours... :))

[Link: Clients From Hell stories (NotAlwaysRight.com)]
@coderanger @ancoghlan @glyph That's the thing. If this tech _actually_ worked, it _would_ be a major game-changer!
@glyph incredibly punctuated thread
@SnoopJ if I'm going to have a major depressive episode in public it might as well have some syncopation
@SnoopJ (thanks)
@glyph I personally feel better having read the words. I'm more of a hardliner, I think, in terms of my willingness to find a meeting place in the middle, but it helps to have a framework to hang that on and to understand that my heuristics are tweaked in such-and-such a way relative to yours and relative to those of enthusiasts (especially the "quickly gauge and react" type you enumerate)
@SnoopJ @glyph I'm also a hardliner. But the thread above really points out the necessity of thinking about others in good faith: if I were to judge people in my circle for using or believing in LLMs, I might as well become a hermit and move to a cottage in the woods.
@pkraus @SnoopJ @glyph To be honest, hermit life in a cottage in the woods sounds more and more appealing as the days go by.

@datarama @pkraus @SnoopJ @glyph 💯

I was in tech during the dotcom bubble, and I smelled the trouble before my financial advisor did. 😨

This sure smells bad. I'm glad I'm approaching an age where I can actually become a luddite. 👩‍🦳

@deborahh @pkraus @SnoopJ @glyph I'm right at that age where retirement is far away, but it'd be seriously uphill to retrain for something else.

@deborahh @pkraus @SnoopJ @glyph (To be honest, I can't really think of what I could retrain *to*, if the AI industry ends up eating software development.

I hate all of this so much.)

@datarama I hear ya.
I left software decades ago. I'm a skilled, well-regarded professional coach. And yet, AI is even cannibalising this, a most human profession. Why? Because my clients are mainly in tech & the shiny AI object has been offered in place of my skillset. #sigh

I hope to grit my teeth and wait it out. Signs are starting to appear that the bubble is at least being recognized, if not bursting.

I pray it bursts soon-soon. 🙏 So we can all get back to using our skillsets, creating value.

@pkraus @glyph I'm definitely judging (how could I not? those heuristics are getting updated whether I want to or not) but the blast radius of my personal opinions is only so large. What I think doesn't really factor much into anybody else's choice to use or not-use the tools.

I try not to be a jerk about it, but I would be lying if I said my opinion of those people's work isn't shifting.

@pkraus @SnoopJ @glyph well if everyone decided to get a cottage in the woods I think we would all be better off
@glyph kind or not, it seems to me a very harsh lesson is to be learned sooner rather than later. My main worry is that the immense price for that lesson won't be paid by smart, kind, tech leaders, and certainly not by the tech billionaires.
@glyph I think you just did write something long form, at least as far as social media is concerned.

@glyph "…shit from a butt…"

LLM is short for LLMWACP (low carb mousse made with avocado and cocoa powder)?