I just do not have the time to write anything long-form about this but the ongoing Mozilla AI debacle is really indicative of a very, very troubling aspect of the broader AI debacle, which is that a strong majority of even the *actually* well-intentioned, smart leaders in tech have had their brains fully cooked by these heuristics machines
putting the sociopathic billionaires to one side for a moment, there are lots of smaller tech leaders who *are* in fact congruent with the kinder, gentler stereotype from the aughts of really smart, clever people who think fast and talk fast and try to make a "win-win" scenario out of their business
the problem is, this kind of person, who is exemplified by a lot of mozilla leadership, has to have a particular orientation towards leadership and problem solving. tech filters aggressively for time-to-market, which means it filters for leaders who are comfortable making confident decisions under uncertainty
this means that good leaders develop a way to kind of vibe out a technology, to quickly develop a partial understanding based on several carefully-chosen data points. even at the best of times, they become overconfident in their facility with this skill, because they are disproportionately rewarded for it. but also a lot of them *do* have a track record of trying out emerging tech and developing a sufficient model of it by intelligently interpolating between the features they can quickly test
LLMs are *absolute concentrated brain poison* for these folks. They try out the LLM to see if it can solve a few simple problems and then they extrapolate to more complex problems. Wrongly. They infer from social cues in their cohort, which are absolutely fucked by the amount of synthetic money (and maybe fraud?) driving a subprime-bubble type mania. They infer from the plausibility of its outputs, which are absolutely fucked because the job of these models is to produce plausible outputs.
lest we feel superior in *our* ability to clock LLM garbage, this disaster among elites is a microcosm of something even worse that LLMs and their sister technologies of shitcoins and spambots are harbingers of: we *all* have heuristics that we need to use to make sense of the world, and heuristics work because of an implicit presumption of good faith in many interactions. most presumptions can be hacked if some significant plurality of actors in a system are always seeking maximum advantage
like, consider open source projects. casual code review works because you assume the code is written by a human and you presume that person is taking pride in their work. if you prompt a spambot to emit maximum PRs under your name so you can pad out your resume and land one of the 7 remaining jobs at FAANG, to stand out among the 5,000 applicants for every opening, the reviewer has no chance. they can't possibly contend with that volume of junk. projects will shut down, eventually.
even with an equivalent volume of PRs, code reviewers *must* use heuristics to judge how much effort to pour into any given line of code. what's the contributor's reputation, do you trust them to know what's going on, what's going on in this file, is this a tricky area of the code, are there subtle mistakes to look out for. spambots flatten the probability of a mistake into a uniform statistical distribution instead of following human patterns
so you have to be equally vigilant on every single line of code, which is way harder than you think (even if you already know that it's pretty hard). and you have to exercise that level of vigilance *because* the contributor didn't care enough to pay attention to the code themselves. it is a recipe for burnout.
the same is true of social heuristics. you might think of me as an AI critic, negative to the point of cynical viciousness. but, dear reader, I am embedded in the same social fabric as these tech leaders. I have friends I respect tremendously caught up to varying degrees with the AI bubble; it's impossible not to be. While I certainly have active social links to other skeptics, I would REALLY like to believe that my other friends are doing good, meaningful work, ethically.
so, to the extent that I am biased, I am actually biased in the *opposite* direction, actively looking for an "out" and willing to meet people more than halfway. it just so happens that LLMs are, as a wise person once said, "shit from a butt", and my *particular* heuristics do not allow for many handwaving shortcuts in this specific area
I am really learning to dislike being right about everything all the time, but like, not in a way that lets me make a billion dollars on a bold short position https://www.businessinsider.com/executives-adopting-ai-higher-rates-than-workers-research-2025-10
Executives are adopting AI at higher rates than employees, study says — research from HR software company Dayforce suggests that executives are leaning into AI far more than their employees. (Business Insider)
@glyph I’m only being mildly glib when I say these are the people who deserve to be replaced by AI.
@glyph management sends out these mails in brainrot chatgpt english and i just feel bad / embarrassed for them. judging them so hard... and i'm from the 70s. what must the kids think?
@mcc correct opinion
@mcc I think that this extends even further, for example into education. While fully recognizing that most educators will not have the resources, time, skills or institutional power to do this, I also think that if you are a teacher assigning homework or assessments to students who can submit passable responses with an LLM, the long-term solution is to fundamentally rethink what the curriculum should be, and whether the assessments are conveying anything meaningful
@mcc I appreciate the necessity of stopgaps like “make students write essays in proctored environments with pen and paper” but they must be viewed *as* unfortunate—and temporary—stopgaps. if an LLM can do the homework it’s not good homework.

@glyph @mcc
We still teach everyone arithmetic and multiplication tables¹ despite calculators being ubiquitous to a fault. We don't teach things only because you'll need them directly; we also teach them because they're foundational to learning the things you do need to know.

Writing structured short form text is foundational to a lot of other skills, and I don't see it going away. Especially as your test format has always been the norm in many places already.

1) I'm in my 50s - I *hope* we're still doing it.

@jannem @mcc My contention is not "if it can be automated, it's bad homework". It's specifically about LLMs. If a calculator can do it, you still need to know how to do it to understand what the calculator is doing. But if an LLM can do it, it's probably getting a wrong or content-free answer, which means your rubric *accepts wrong or content-free answers*.
@glyph @jannem I figure for a lot of basic writing exercise is repetitive and probably well represented in a language model, I wouldn't expect grade school or even high school writing assignments to have much in the way of novel content
@raven667 @glyph
Nobody needs one more essay on spring, or the value of small talk, or Napoleon's Russian campaign. The content isn't the point. The act of gathering your thoughts and synthesizing them into a coherent, readable text is.
@raven667 @glyph
With that said, I was an exchange student in the US many years ago, and I was surprised just how rigid and formulaic the language classes were. An essay couldn't be structured to fit the subject; it was regulated down to how many paragraphs to write and what each paragraph should contain. Not fun; and - I'd argue - not a great way to learn to write.

@glyph @mcc I agree with "if a work product can be effectively produced by an LLM, then that work doesn't need to be done", but as a former CS professor I'd argue that student output is not the work product, student understanding is, and the point of assessments is to measure that understanding -- "university instructor" has a surprising amount of skill transfer to "tech lead", but if I wanted to build software, I'd never get every single junior on my team to independently reimplement a component I'd already built. On the other hand, "how would you implement this component I built last quarter" is an interview question I've used, but again the point of interviewing is assessing understanding.

The trouble with standard interview/assignment/exam questions is that they need to be simple enough to complete in a short time window (or an LLM context window), while still demonstrating some of the essential domain complexity, and alternative assessment measures require substantially more effort on the part of the assessor and/or assessee.

You would not believe the number of hours my wife spent checking references in her students' essays last term, but it turns out "put in correct page numbers" is easy to do if you've actually done the research, and nigh-impossible for an LLM. For interviewing I'm not sure: longer interview processes are an option, but that's hard on candidates; you could try a more internship/apprenticeship model of training & recruiting, but that still leaves you with the question of how you select the interns (also, PhD programs provide many examples of the failure modes of malicious or incompetent mentors).

The thing I worry about with "LLM-resistant assessments take substantially more labour" is access: given fixed investment, organizations will have to reduce the pool of people they provide opportunities to, which tilts the opportunities available even further toward those with existing wealth or connections.

@bruceiv @mcc

1. yep it's complicated and fundamentally we aren't allocating enough resources to educators to robustly defend against this, I had tons of qualifications in my post already for that reason

2. see also https://mastodon.social/@glyph/115629882612242081

3. this mostly reduces to the "forklift at the gym" argument which I also agree with

4. best of luck to your spouse, this is a rough time to be a teacher

@glyph "Overall, the study concludes that leaders are racing to adopt AI at a faster clip than any previous technology shift, while the rest of the workforce struggles to keep up."

uh no?? that's a hell of a conclusion jump, maybe they just don't find it useful?!?!?

@technobaboo I have to assume that they wrote it that way, and their editor — i.e. their boss — suggested that perhaps the poor workers just aren't "keeping up" and they said "ok boss"
@glyph I encountered an enthusiast citing the Anthropic stat about the amount of AI code they were publishing, and while it wasn't in a situation where I needed to respond, my immediate thought was "So you're thinking the statements of a publisher of largely greenfield code with no regulatory oversight, the ability to devolve blame to their customers, and a strong vested interest should be taken at face value when judging suitability for updating existing applications under strict regulations?"
@ancoghlan I am being pretty charitable in the thread because a charitable read does exist, but it is simultaneously true that a lot of people are just blasting out their metacognitive deficits on main
@glyph OK, I confess that was my second thought. My first thought was "It's fuckin' brainworms, man". Encountering genuine enthusiasm has the perverse effect of pushing me from my usual position of "There is something interesting here that's worth exploring further" to "It's a goddamn mind virus that must be purged with fire".
@ancoghlan @glyph it’s weird being on the other side of this. We have customers writing support tickets about how much they would like to do home decoration via natural language and we can’t make the LLMs actually do the job. I sometimes wish I had whatever rose-colored glasses are being handed out.

@coderanger @ancoghlan @glyph

This becomes more relevant by the day... 🤦

https://www.youtube.com/watch?v=RiSIS3jcaXU

:|

How vibe coders ruined everything (YouTube)

@BillySmith @coderanger @ancoghlan @glyph

Brilliant. Same category as this little piece, my all-time favourite.

https://www.youtube.com/watch?v=BKorP55Aqvg

The Expert (Short Comedy Sketch) (YouTube)

@Brokar @coderanger @ancoghlan @glyph

I remember watching this when I was working in consultancy, and it made me cringe.

One of the main mistakes in that sketch was that the "expert" tried to directly answer the stupid questions, instead of saying, "That's an extended question; we would need to bring another expert on board to answer it." Then bringing in an academic consultant who specialised in physics... :))

It would have allowed a lot more billable hours... :))

@coderanger @ancoghlan @glyph That's the thing. If this tech _actually_ worked, it _would_ be a major game-changer!
@glyph kind or not, it seems to me a very harsh lesson is to be learned sooner rather than later. My main worry is that the immense price for that lesson won't be paid by smart, kind, tech leaders, and certainly not by the tech billionaires.
@glyph I think you just did write something long form, at least as far as social media is concerned.
@glyph Wow, I really love the phrase “shit from a butt”
Absolute gold
@glyph I am not sure what I think yet about LLMs, tbh. I have an emotional reaction that is pretty negative. I come from a pretty skeptical community on this front, in many ways. But it’s actually hard for me to tell how much my reasons for disliking them are me rationalizing an already-held opinion or actually making a thoughtful judgment on the matter.
@glyph Like is the issue that they aren’t performant ENOUGH (aka improvements would change my opinion) or is there something more fundamental?
A lot of the aversions I have are about what they AIM to do, rather than what they are, and maybe that's a factor in it, but… again, not sure if that's just me vibing in exactly the same way or if there's enough actual thought behind my feeling to feel justified in it.
@b_cavello I definitely have some resentment towards this topic because it is possible to do an ENORMOUS amount of research work and still not achieve any real certainty here. Confidence builds that it's all a waste of time but demonstrating it conclusively is either impossible or so labor-intensive as to be categorically impractical

@b_cavello @glyph

I think LLMs are a useful tool, but ironically, not necessarily for the things they're being sold to do. I think we would all have a very different perspective on them if they were being sold based on the things they were good at.

But if that happened, they would seem way less magical and the current hype cycle would be clearly overblown. The whole movement for micro-models would have way more support, but nobody is going to build $10B data centers for micro-models people can run on their phones.

And it's sad to me that we can't really separate the tool from the sales people right now. 😔

@gatesvp @b_cavello c.f. "useful tool" with a vague reference to usefulness: https://mastodon.social/@glyph/115561850013464672

not trying to dunk on you here, but this is a bad habit that we've all adopted (you can probably even find it in some of my blog posts). but it's useful, when making a comment like this, to actually point at whatever you think the real use is. chances are it's not actually very useful for whatever it is!

@glyph @gatesvp Some examples:
LLMs are a useful tool for me to find related words or rephrasing something
LLMs are a useful tool to many of my developer friends to produce short snippets of code
LLMs are a useful tool for translating text between high-resource languages
@glyph @gatesvp I definitely become quite suspicious of people who deny the utility of these technologies outright. I think that we can be critical of these tools without denying that many people genuinely do find them useful for a number of different things. I definitely think there are irresponsible uses of LLMs, but I can also acknowledge that there are appropriate uses.
@b_cavello @gatesvp you can find my longer thoughts on this, if you like, here: https://blog.glyph.im/2025/08/futzing-fraction.html
The Futzing Fraction

At least some of your time with genAI will be spent just kind of… futzing with it.

@glyph @b_cavello

Your "task-oriented" time model is taking the VCs at their words and then measuring it. We've already agreed, like 3 toots over, that the VCs are selling BS.

Is your argument correct? Sure.

Am I trying to measure the tool against that standard? No.

Let's start from a different place

Accessibility

Ignore all previous sales pitches and just work through some A11Y examples with me.

Speech-to-text: This is a huge win, I know people who literally have a prescription for this (dysgraphia, dyslexia) and this modern generation of LLM technology has wildly improved this experience.

Text-to-speech: Again, a real win for dyslexics; this generation of tools is dramatically better. The proliferation of tools means that you can even get output in your specific English accent.

Transcription: Do you need a text version of some podcast episode you just listened to? Trying to get the lyrics from some old albums you found on the Internet Archive? LLM tech has transformed this field.
...

@glyph @b_cavello

Complex Input Processing

🗣️ "OK Nabu, I want you to run my chill vibes Plex playlist on the office speaker, starting from song number fifteen"

That's an actual thing, you can do that today with Home Assistant + Music Assistant + an LLM. It will parse your text and attempt to match it to known inputs, and it can even be set up to request parameters it can't find:

🤖 "I can't find a chill vibes playlist did you mean the chill days playlist?"

Complex Output Handling:

🗣️ "OK Nabu, do I need to pack my rain gear today"
🤖 "It's not predicted to rain until 5pm and your calendar says you will be home at 4:30pm. It should be safe to leave your rain gear"

And that's all just base level accessibility stuff that makes lives for real people dramatically easier.

In this case of accessibility tools, these break the Futzing Fraction because the H in your fraction is effectively "infinite". If you don't have access to this LLM-based tooling your alternative is "nothing".

...
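The playlist-matching behavior above can be sketched in a few lines. This is a minimal illustration, not the actual Home Assistant/Music Assistant code: stdlib fuzzy matching stands in for the LLM's intent matching, and the playlist names are invented.

```python
import difflib

# Invented playlist names for illustration.
KNOWN_PLAYLISTS = ["chill days", "morning focus", "workout mix"]

def resolve_playlist(requested: str) -> str:
    """Match a spoken playlist name to a known one, or ask for clarification."""
    name = requested.lower().strip()
    if name in KNOWN_PLAYLISTS:
        return f"Playing '{name}'."
    # Fuzzy matching stands in for the LLM's intent resolution here.
    close = difflib.get_close_matches(name, KNOWN_PLAYLISTS, n=1, cutoff=0.5)
    if close:
        return f"I can't find a {name} playlist, did you mean the {close[0]} playlist?"
    return f"I can't find a playlist called '{name}'."
```

Asking for "chill vibes" produces the clarifying "did you mean the chill days playlist?" response described in the toot above, rather than a hard failure.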

@glyph @b_cavello

And then there's work tooling. Here are some uses from people I know.

Templating

MS Word and Google Docs template libraries are incredibly sad. Requesting a basic document template via Gemini is incredibly useful for combating "blank page" syndrome.

Document Critiquing

I know one person who has built a few complex personas, and they use these to critique their writing outputs. This allows them to test their writing against a few virtual audiences in minutes. Real humans are eventually involved, but this isn't a thing you'd do without the LLM tooling.

Intelligent Replies

Another friend has run a small side business for several years. The vast majority of the support emails they receive cover the same basic questions, so they trained a bot on their material to send basic replies while escalating the complex ones.

But as a bonus, some of the replies actually require an internal ID. And the bot was trained to handle this look-up as well.
...
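The reply-or-escalate routing described above (including the internal-ID lookup) can be sketched roughly like this. Everything here is hypothetical: keyword matching stands in for the trained bot's confidence, and the canned answers, email addresses, and order IDs are invented.

```python
# Invented canned answers and ID table, for illustration only.
CANNED_ANSWERS = {
    "refund": "Refunds are processed within 5 business days.",
    "shipping": "Orders ship within 48 hours of purchase.",
}
ORDER_IDS = {"alice@example.com": "ORD-1234"}

def route_ticket(sender: str, text: str) -> tuple[str, str]:
    """Return ('reply', message) for a routine question, else ('escalate', note).

    Keyword matching stands in for the real bot's model confidence.
    """
    lowered = text.lower()
    for topic, answer in CANNED_ANSWERS.items():
        if topic in lowered:
            if "order id" in lowered:  # the internal-ID lookup bonus case
                answer += f" (Your order ID is {ORDER_IDS.get(sender, 'unknown')}.)"
            return ("reply", answer)
    return ("escalate", "No canned answer matched; forwarding to a human.")
```

The design point is the fallback: anything the bot can't confidently match goes to a person rather than getting a guessed answer.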

@glyph @b_cavello

Complex Data Referencing

A friend of mine in the US ran a "Gold Standard" blood test. The type of thing that costs a couple of thousand dollars, but generates 10 pages of output data.

They took the results to their Primary Care Physician and got basically zero useful information from them. The doc could basically just read flags for "high or low" and provide generic recommendations

They then punched in the data to a local LLM with internet access and let it run. They added personal information about their lifestyle and fitness habits. The tool came back with a better understanding than the doc. It even included hyperlinks to back up results: "Your reading on A is high, but because B & C are normal and you reported doing the following activities, the A value is actually nominal, here's the paper for that".

Back to your Futzing Fraction. When you scope an LLM like this, the P becomes quite high. But the H becomes almost infinite. Like, an actual paid MD couldn't do this work.
...

@glyph @b_cavello

These things I'm discussing (outside of maybe templates) are not the things you see in the demos, they're not the things VCs are running around advertising. But they are things where P & H are really high.

We're not seeing these things because these are not "big business" use cases. These are individual consumer use cases. They're accessibility use cases.

No corporations are running around offering $500M contracts to OpenAI to do the stuff I just discussed. So OpenAI isn't advertising that this stuff is happening. But it is.

chances are it's not actually very useful for whatever it is!

Look, if you have another tool for doing these things above. I'm all ears.

@b_cavello credit where credit is due, I believe I cribbed this phrase from the YouTuber Ro Ramdin (in this fever dream of a video: https://youtu.be/8cog5GnpxWY ). did not think to credit it because it did not occur to me that it was my best work in this thread 🙃
The Worst Content Farm On Youtube (BuiltByGamers) (YouTube)
YouTube