My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. I became a professional computer toucher because they do exactly what you tell them to. Not always what you wanted, but exactly what you asked for.

LLMs turn that upside down. They turn a very autistic do-what-you-say, say-what-you-mean communication style with the machine into a neurotypical conversation talking around the issue, but never directly addressing the substance of the problem.

In any conversation I have with a person, I’m modeling their understanding of the topic at hand, trying to tailor my communication style to their needs. The same applies to programming languages and frameworks. If you work with a language the way its author intended, things go a lot easier.

But LLMs don’t have an understanding of the conversation. There is no intent. It’s just a most-likely-next-word generator on steroids. You’re trying to give directions to a lossily compressed copy of the entire works of human writing. There is no mind to model, and no predictability to the output.
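That "most-likely-next-word generator" can be sketched at toy scale as a bigram Markov chain (a made-up miniature, nothing like a real transformer internally, but the same spirit): no intent, no model of meaning, just sampled continuations of whatever came before.

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus; the code below only learns which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count followers, wrapping around so every word has at least one successor.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    following[prev][nxt] += 1

def next_word(word, rng):
    """Sample the next word in proportion to how often it followed `word`."""
    words, weights = zip(*following[word].items())
    return rng.choices(words, weights=weights)[0]

rng = random.Random()
sentence = ["the"]
for _ in range(5):
    sentence.append(next_word(sentence[-1], rng))
print(" ".join(sentence))  # fluent-looking output with no intent behind it
```

Every transition is locally plausible because it was seen in the corpus, yet there is no "point" the generator is working toward, which is exactly the complaint above.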

If I wanted to spend my time communicating in a superficial, neurotypical style my autistic ass certainly wouldn’t have gone into computering. LLMs are the final act of the finance bros and capitalists wrestling modern technology away from the technically literate proletariat who built it.

@EmilyEnough This is a legitimate rant. There’s a lot of quicksand out there right now.

@EmilyEnough as a non-autistic person, they are also horrible at the other communication styles, since those require comprehension and intuition. Like, I can’t read what an LLM is getting at because it’s not getting at anything. It’s a parlor trick at best, with no memory and no real relationship with me.

And yeah, the whole point of computers was to have something more dependable and predictable than human capacities, especially in…computing. Like, it’s almost impressive to make a computer bad at computing.

@JoscelynTransient "as a non-autistic person" says the lady with a hyperfixation on a cartoon character she strives to personify
@twipped I swear it’s just the adhd…I swear! 🤭
@JoscelynTransient @twipped I mean... If you're going to have an autistic hyper-focus on an ADHD cartoon character, Harley Quinn is the one...
@JoscelynTransient @twipped As a total side note, I love how the HowToADHD girl has started an entire YouTube career around infodumping about ADHD coping strategies and hasn't come out as autistic yet.
@faithisleaping @twipped hey, adhd people also hyper focus and infodump! This isn’t only an autistic thing 😝

@faithisleaping @twipped going to leave this here too. Kinda don’t like where this joke is heading, cause it’s kind of miserable having people deny one’s ADHD symptoms 😕

https://chaosfem.tw/@JoscelynTransient/116279171368684989


@JoscelynTransient I'm sorry. I was just being playful. I didn't mean to deny your ADHD struggles. FWIW, I can usually clock an autistic person at a mile and I'm also pretty sure you're not. I'm just making dumb (and apparently insensitive) jokes.

@twipped

@faithisleaping @twipped oh, I knew you were joking around…it just was building really quickly in a direction that made me uncomfortable. I also suspect I might be on my period and extra sensitive right now 😅

And for the record, I would be very happy to also be autistic if that was the case. And I was giggling at first too with the jokes.

@JoscelynTransient I'm sitting here with both, and I can infodump in two different modes. They feel very different, even if they look the same from the outside.
@twipped @faithisleaping

@thatfrisiangirlish Okay, I'm very curious about this distinction, would you mind elaborating? Asking for, uhm, me.

@twipped @JoscelynTransient @faithisleaping

@anyia This is largely a work in progress for me as well, so there are some edges I'm not too sure about, and it's definitely a subjective thing - this certainly works like that for me, but for you or anyone else, I don't have the faintest idea.

Type A is mostly structured, and basically there to share something with you that I find extremely interesting. To do this justice, and to give you the full picture like you deserve, I have to give you the exhaustive rundown. I looked hard into this, and I'm just so excited to share this with you! This is mostly motivated by some need to share, meant to convey a complex bit of information, and I'll probably get upset if you're not excited, as well.

Type B is more exploratory, where I mostly verbalize the train of thought going on in my head. And believe you me, I can think and speak like an extremely pedantic textbook. What I say draws on other things I know, but I am not quite sure where this one goes. This is mostly motivated by sharing my thoughts on a topic as they happen, meant to collaboratively work on a topic, but unfortunately, I'll get very upset if you cut into this, because that's cutting right into my thought process, and who likes to be interrupted just as you have an idea on the tip of your tongue?

Anyway, I don't know which belongs where, or even if they belong to specific neurotypes, but it is a hypothesis. From the outside, both probably feel quite like getting this text read at you at pretty high speed.

@twipped @JoscelynTransient @faithisleaping

@thatfrisiangirlish neat, thank you! I'd say I'm more likely to venture down path B. Path A feels like a lot of prep work. Or maybe it's a mix of the two? Often as I'm explaining something I realise I need to detour to provide necessary foundational knowledge, before returning to the first train of thought. Sometimes I get lost in the nesting.

@twipped @JoscelynTransient @faithisleaping

@anyia There's not really much specific preparation involved in Type A infodumps. Preparation in the sense of immersing myself enough in the topic, yes, but I did that anyway, because I wanted to for my own reasons.
For example, the genetics of horse coats. I could spontaneously give a little speech and presentation about the Leopard complex that feels like I only lack a slide deck running in the background to give it on a stage somewhere, but that's just from having the whole topic well understood and filed away. It was definitely a useful trait to have in university, when no one in the group had actually prepared their part of the presentation, but since I understood what we were writing about...
@twipped @JoscelynTransient @faithisleaping

@twipped on a serious note though, I really am not on the spectrum. Seriously considered it, but I fail almost every aspect, from not getting overstimulated and needing to actively work to understand what causes that for my autistic friends to being guilty of failing to be explicit and direct enough with autistic friends and causing communication difficulties (thanks to living in Japan and Japanese language contexts for a number of years, I actually sometimes am too indirect for neurotypical USians even).

A person can just be severely ADHD and have the overlapping manifestations like hyperfocus, communication difficulties with more neurotypically-aligned folks, and Infodumping. To be a bit more direct: It does feel a bit invalidating to have people insist ADHD doesn’t exist or have some of these manifestations, so would really encourage people not to do that please?

@EmilyEnough I want to boost this a thousand times. This is so well written. I've wanted to express this for so long, but hadn't found the words. Thank you.

@EmilyEnough Wow, I have thought a lot about how coding LLMs are antithetical to my own OCD tendencies that want everything to be built and formatted in a very specific way (i.e. the right way), but had not considered how terrible the interface would be for folks who prefer not to have to process information conversationally.

I would love to read an entire book or series of articles about how LLMs as an interface enforce neurotypical modes of communication on neurodiverse people.

@mikemccaffrey @EmilyEnough The "you can write natural language queries" idea has always gotten a response from me of "why the fuck would I want to do that?" Standard search engine queries and stuff are so much easier.
@gourd @mikemccaffrey @EmilyEnough "I don't want to spend thirty minutes learning! I don't want to read a guide! I don't want to learn how to use a tool! I'm afraid of learning!"
People are taught to be uncurious & to be terrified of learning things now. Maybe the reason most people don't complain about search engines being nonfunctional now is because most people do not use search engines, libraries, or other methods of seeking information. They're ok with not knowing. They prefer to not know. Very dystopian.
@gourd @mikemccaffrey @EmilyEnough I completely agree, and what is "natural language" anyway?! Sounds like an ableist agenda, right?
@ennenine @gourd @mikemccaffrey @EmilyEnough I guess I'm the wrong kind of disabled because this is how search engines do work now

@mikemccaffrey Neurotypicality is just one of many biases that LLMs amplify. They also amplify the latent racism, sexism, ableism, and Western ideologies that dominate English-language writing online, etc.

But until I read this post by @EmilyEnough , I didn’t realise what a neurodivergent torture device LLMs are. I think not enough has been written on that subject yet. My adult son is neurodivergent and an awesome programmer. He also hates LLMs with a passion. I’m now seeing how this all comes together.

@mikemccaffrey @EmilyEnough there’s a related situation (without all the other downsides): I often take scans of public domain sheet music and turn them into digital musical engravings (which you can then play, print, convert into Braille music, easily arrange, etc).

In the beginning, I thought it would be easier to take a digital score of the same piece from someone else and just fix bugs and remove and add things until it represents what I need (they are often minor arrangements), even wrote a cleanup XSLT to remove hidden "gems".

Turns out that looking through what others did is just so much harder that it’s faster to type in the whole thing from scratch (and in both cases I could use someone after me to look it over for my typos anyway).

@EmilyEnough

plagiarism laundering machine

Never thought about it like this, but it is indeed similar to laundering.

You’re trying to give directions to a lossily compressed copy of the entire works of human writing.

This line sums up the futility extremely well.

@Evelyn Estelle Manifestos are fiction dressed up as facts. Particularly in the preambles, manifestos are a type of storytelling. The two famous manifesto templates, “The Communist Manifesto” and “The Founding and Manifesto of Futurism,” begin as stories. The Communist Manifesto begins with a ghost story (a specter is haunting Europe). The futurist manifesto starts with an account of how the manifesto was created (blackening reams of paper with our frenzied writing). The futurist movement also included a car chase (in 1910!). This ended with Marinetti crashing into the ditch and being pulled out by a group of fishermen. From that very spot the futurists proclaimed “Faces smeared with factory muck, we shout aloud the word Futurism!”
@EmilyEnough THIS oh my GODDESS 🤯   🙏🏼

@EmilyEnough "I became a professional computer toucher because they do exactly what you tell them to. Not always what you wanted, but exactly what you asked for."

In the 1970s, yes, when you wrote every single byte of code in the machine and could watch every bus cycle on a logic analyser.

I reckon the rot set in long before LLMs - I reckon it started with on-chip cache, so you could no longer see how each instruction operated through each clock cycle, because some instructions no longer needed to touch the bus at all.

@EmilyEnough there's no saving people, you know.
@EmilyEnough "squishy" computing

@EmilyEnough The one and only reason I ever got into computers back in the late 70s/early 80s was exactly this.

It was so refreshing to work with something that had a super specific and repeatable instruction set, where the *vast* majority of issues could be nailed down quite precisely to something I could control.

If I didn't like this sort of work, I wouldn't do it.

@EmilyEnough
The most interesting detail for my autistic brain is that professional computer touchers like myself are technically proletarians but are paid like the managerial class (also called the labor aristocracy).

So what we - imho - are witnessing is the proletarianization of computer workers. Like other jobs that used to be well paid before they got eliminated/recuperated.

@EmilyEnough Very interesting observation, thanks a lot. I hadn’t perceived it that way - I used to work a lot with probabilistic models, simulated annealing, and genetic algorithms, and in those cases the computer works entirely deterministically but the result is always different. So I lost my expectation of a deterministic result a long time ago ;-)
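A minimal sketch of that point (a made-up one-dimensional example, not any of the poster's actual models): simulated annealing on a bumpy function. The machine itself is fully deterministic - fix the seed and you get the identical answer every time - yet across seeds the "same" computation can land in different minima.

```python
import math
import random

def anneal(seed, steps=2000):
    """Minimize a bumpy 1-D function with simulated annealing."""
    f = lambda x: x * x + 3 * math.sin(5 * x)  # many local minima
    rng = random.Random(seed)          # all randomness comes from this seed
    x = rng.uniform(-4, 4)
    temp = 2.0
    for _ in range(steps):
        candidate = x + rng.gauss(0, 0.3)
        # Accept downhill moves always, uphill moves with shrinking probability.
        if f(candidate) < f(x) or rng.random() < math.exp((f(x) - f(candidate)) / temp):
            x = candidate
        temp *= 0.999                  # cool down gradually
    return round(x, 3)

print(anneal(seed=1), anneal(seed=1))  # same seed: identical results
print(anneal(seed=1), anneal(seed=2))  # different seeds may disagree
```

The nondeterminism lives entirely in the seed choice; the hardware never stops being a deterministic computer underneath.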
@EmilyEnough Very astute and exactly my experience too - I went into computing for the same kinds of reasons and as you say LLMs break that. Thank you for expressing it so clearly.

@EmilyEnough I think you're absolutely correct on this. Yet another reason why we need to find a way to irrevocably destroy this abomination.

But also it's not just the style of "communication" that these algorithms are pretending to do, it's that you cannot trust that their output is even correct because they have no understanding of what they are "saying". They could be "hallucinating" complete nonsense but they'll output it in an authoritative way and may even make up references that don't exist. They're 100% bullshit generators (it's even been scientifically proven).

@EmilyEnough

My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine

Also known as “training”. When people are trained in art, they don’t reinvent art from scratch. This is why you can’t really sue an LLM for plagiarism: you can’t even identify specific victims in the first place.

and disaster for the environment,

Nope. The whole IT sector uses about 3–5% of global electricity, so poor home insulation is a much bigger problem overall.

is that they introduce so much unpredictability into computing.

We call it a statistical method, or more precisely a stochastic system. Because, to a large extent, human behaviour itself can be modelled as a stochastic process.

If I wanted to spend my time communicating in a superficial, neurotypical style my autistic ass certainly wouldn’t have gone into computering.

The problems you face when communicating with LLMs are the same ones you face when communicating with people, because statistically speaking an LLM mimics how people communicate.

This is why computer-mediated communication was used before, and is still used, in contexts where computers do not try to mimic humans.

The core issue is that mimicking humans reproduces the same communication problems people already have with one another; and the “unpredictability” of the other party is nothing new in human interaction.

LLMs mimic humans, so the problems you encounter with LLMs are the same ones you encounter with humans. The point is that you consider exactly these issues normal when you face them with other people.

@uriel @EmilyEnough

> Nope. The whole IT sector uses about 3–5% of global electricity, so poor home insulation is a much bigger problem overall.

Source?

> We call it a statistical method, or more precisely a stochastic system. Because, to a large extent, human behaviour itself can be modelled as a stochastic process.

Source? In fact this is false. Human behaviour includes more than a stochastic process, even though it may adopt stochastic heuristics to speed up some computational parts. This is also why LLMs are technically speaking *not* AI. An AI includes, as human reasoning does, an internal world model and the basic set of Boolean probability-logic rules. See for instance Russell & Norvig's *Artificial Intelligence: A Modern Approach* (http://aima.cs.berkeley.edu/global-index.html), or Pearl's older *Probabilistic Reasoning in Intelligent Systems* (https://doi.org/10.1016/C2009-0-27609-4). LLMs are, instead, just Markov chains (https://doi.org/10.48550/arXiv.2410.02724). A modern robot vacuum cleaner is more "AI" than an LLM.

This is also the reason why the larger the software project you apply an LLM to, the more likely it is to fail. Such applications require longer and longer string correlations, which are therefore more and more uncertain and fault-prone, and those faults are in turn more difficult to spot. They may also require new or innovative kinds of solution, which again an LLM is less likely to stumble upon.

> The problems you face when communicating with LLMs are the same ones you face when communicating with people, because statistically speaking an LLM mimics how people communicate.

No, because humans, and also *proper AI*, have a "logic engine" underneath. It may require some effort to bring the logic engine to the fore instead of poor heuristics, but it can be done (related: Kahneman's *Thinking, Fast and Slow*, and the research cited there). With an LLM it can't be done, because there's no logic engine there at all.


@pglpm @EmilyEnough

Source?

Who am I, your secretary? Just google it.

Here is my answer, complete:

https://keinpfusch.net/those-who-fear-ai/

@uriel @EmilyEnough
No, you're the one making the claim, so the onus is on you to give evidence.

@pglpm @EmilyEnough

ok, since you aren't able to, let me google for sources:

https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

MIT says 4.4%.

Arxiv is so full of shit, I don't even care. WARNING: next time you ask me to google something for you, since you are too stupid to do it yourself, you must pay me.

@uriel @EmilyEnough

So:
- you make claims without supporting evidence,
- you simply dismiss as "full of shit" any evidence that's inconvenient to you,
- you just call others "stupid".

I don't know if you think you're smart, but with these traits other people see very clearly that you're no different from a flat-earther, and will treat your claims accordingly. Guess who's the one "full of shit".

Bye bye Mr Flat-Earth.

@EmilyEnough

"They turn a very autistic do-what-you-say, say-what-you-mean commmunication style with the machine into a neurotypical conversation talking around the issue, but never directly addressing the substance of problem."

OMG - That's perfect. Maybe also explains why everyone loves them that much. 🤨

@EmilyEnough There are so many "My biggest problem with LLMs, even if it weren't for <list of other big problems>" posts, there should be a collection of them somewhere.

But, yes, this bit bugs (pun intended) me and worries me. I'm more and more falling for BEAM family languages (Erlang, Elixir and Gleam) because of how they are designed to be as predictable as possible.

It may not be too odd that I see a lot less AI push in that ecosystem compared to other ones.

@EmilyEnough Well said. This could never have been LLM-generated. 🙂👍

@EmilyEnough this is a very justified rant

But the thought of computers being too autistic, so people had to turn them neurotypical by adding LLMs, is just so funny

@Chase @EmilyEnough yeah the concept of a neurotypical computer is forever living in my head rent free
@EmilyEnough Had an interesting chat with the senior director at my office recently. He pointed out that as far as he can see, he already uses natural language to explain what he wants from software. This is just faster.
It was a perspective I hadn't considered before, but the more I think about it the more I think it's deeply insulting.
@rupert he is telling you flat out that he plans on replacing the expensive translation layer (you) asap. By and large that’s how the entire capital class sees this technology, as a way to eliminate expensive human labor without doing any actual work themselves.
@EmilyEnough I knew the plan. I just couldn't understand why he thought it would work.

@rupert @EmilyEnough

As a system architect, this is also what I do. The thing is, I absolutely depend on the people who do the implementation having good judgement. They need to fill in the gaps (if there were no gaps, I would have an implementation already) but also tell me if there are real problems with some of the ideas. This is why the first thing I do with a design is have it reviewed by people who will implement it. If they tell me ‘actually, this thing you forgot to consider is where our critical path is’ then that often leads to a complete redesign, or at least to significant change. The LLM will just produce something. With an ‘agentic’ loop and some automated testing, it will produce something that passes my tests. But it won’t tell me I’m solving the wrong problem.

I don’t have a problem with constrained nondeterminism in general. There are loads of places where this is fine. The place I used machine learning in my PhD was in prefetching. Get it right and everything is faster. Get it wrong and you haven’t lost much. This kind of asymmetry is great for ML-based probabilistic approaches: the benefit of a correct answer massively outweighs the cost of an incorrect one. The other place it works well is if you have a way of immediately validating the output. I supervised a student using some machine-learning techniques to find better orderings of passes for LLVM. They were tuning for code size (in a student project, this was easier than performance, which requires more testing). You run the old and new versions, one is smaller. That gives you an immediate signal and so using non-deterministic state-space exploration is great. You (probably) won’t get the optimal solution but you will get a good one, for far less effort than trying to reason about the behaviour of the interactions between dozens of transforms.
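The pass-ordering search described above can be sketched with toy stand-ins (made-up string-rewriting "passes", not real LLVM ones): randomly explore orderings and let an immediate, objective signal - output size - pick the winner, without reasoning about how the transforms interact.

```python
import random

# Toy "passes": each rewrites the program text in some way.
passes = {
    "fold":   lambda s: s.replace("1+1", "2"),   # constant folding
    "dce":    lambda s: s.replace("nop;", ""),   # dead code elimination
    "inline": lambda s: s.replace("f()", "1+1"), # inlining exposes constants
}

program = "x=f();nop;y=1+1;nop;"

def size_after(order):
    """Apply passes in the given order and return the resulting size."""
    s = program
    for name in order:
        s = passes[name](s)
    return len(s)

# Random state-space exploration with immediate validation: try orderings,
# keep whichever produces the smallest output.
rng = random.Random(0)
names = list(passes)
best = min((rng.sample(names, len(names)) for _ in range(20)), key=size_after)
# Running "inline" before "fold" lets the exposed constant fold away too;
# the size signal discovers that interaction without understanding it.
print(best, size_after(best))
```

The signal is cheap and unambiguous, so wrong guesses cost almost nothing, which is exactly the asymmetry that makes probabilistic search work here.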

It’s not clear to me that LLMs for programming have either of these properties.

@david_chisnall @rupert @EmilyEnough

"This kind of asymmetry is great for ML-based probabilistic approaches: the benefit of a correct answer massively outweighs the cost of an incorrect one."
@david_chisnall

Good god. Not if the incorrect answer leads to the mass death of the innocent. Which it almost always does.
ST

"Evil knows no ideology or boundary, only an eloquent stance behind them."
SearingTruth