My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. I became a professional computer toucher because they do exactly what you tell them to. Not always what you wanted, but exactly what you asked for.

LLMs turn that upside down. They turn a very autistic do-what-you-say, say-what-you-mean communication style with the machine into a neurotypical conversation talking around the issue, but never directly addressing the substance of the problem.

In any conversation I have with a person, I’m modeling their understanding of the topic at hand, trying to tailor my communication style to their needs. The same applies to programming languages and frameworks. If you work with a language the way its author intended, things go a lot easier.

But LLMs don’t have an understanding of the conversation. There is no intent. It’s just a most-likely-next-word generator on steroids. You’re trying to give directions to a lossily compressed copy of the entire works of human writing. There is no mind to model, and no predictability to the output.
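As a toy illustration of the "most-likely-next-word" idea (a made-up bigram table, nothing like a real transformer's learned weights, but the same generate-by-appending loop):

```python
# Toy "most-likely-next-word" generator: a hypothetical bigram lookup
# table. Real LLMs use learned weights over huge vocabularies, but the
# generation loop is conceptually similar: pick a likely next token
# given the context, append it, repeat.

bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"on": 2},
    "on": {"the": 2},
}

def most_likely_next(word):
    """Return the highest-count successor of `word`, or None."""
    followers = bigram_counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

def generate(start, max_words=5):
    """Greedily chain most-likely successors starting from `start`."""
    words = [start]
    for _ in range(max_words - 1):
        nxt = most_likely_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # "the cat sat on the"
```

There is no model of the listener anywhere in that loop, only frequency statistics; which is the point being made above.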

If I wanted to spend my time communicating in a superficial, neurotypical style my autistic ass certainly wouldn’t have gone into computering. LLMs are the final act of the finance bros and capitalists wrestling modern technology away from the technically literate proletariat who built it.

@EmilyEnough This is a legitimate rant. There’s a lot of quicksand out there right now.
@EmilyEnough I want to boost this a thousand times. This is so well written. I've wanted to express this for so long, but hadn't found the words. Thank you.

@EmilyEnough Wow, I have thought a lot about how coding LLMs are antithetical to my own OCD tendencies that want everything to be built and formatted in a very specific way (i.e. the right way), but had not considered how terrible the interface would be for folks who prefer not to have to process information conversationally.

I would love to read an entire book or series of articles about how LLMs as an interface enforce neurotypical modes of communication on neurodiverse people.

@mikemccaffrey @EmilyEnough The "you can write natural language queries" idea has always gotten a response from me of "why the fuck would I want to do that?" Standard search engine queries and stuff are so much easier.
@gourd @mikemccaffrey @EmilyEnough "I don't want to spend thirty minutes learning! I don't want to read a guide! I don't want to learn how to use a tool! I'm afraid of learning!"
People are taught to be uncurious & to be terrified of learning things now. Maybe the reason most people don't complain about search engines being nonfunctional now is because most people do not use search engines, libraries, or other methods of seeking information. They're ok with not knowing. They prefer to not know. Very dystopian.
@gourd @mikemccaffrey @EmilyEnough I completely agree, and what is "natural language" anyway?! Sounds like an ableist agenda, right?
@ennenine @gourd @mikemccaffrey @EmilyEnough I guess I'm the wrong kind of disabled because this is how search engines do work now

@mikemccaffrey Neurotypicality is just one of many biases that LLMs amplify. They also amplify the latent racism, sexism, ableism, and Western ideologies that dominate English-language writing online, etc.

But until I read this post by @EmilyEnough , I didn’t realise what a neurodivergent torture device LLMs are. I think not enough has been written on that subject yet. My adult son is neurodivergent and an awesome programmer. He also hates LLMs with a passion. I’m now seeing how this all comes together.

@mikemccaffrey @EmilyEnough there’s a related situation (without all the other downsides): I often take scans of public domain sheet music and turn them into digital musical engravings (which you can then play, print, convert into Braille music, easily arrange, etc).

In the beginning, I thought it would be easier to take a digital score of the same piece from someone else and just fix bugs and remove and add things until it represents what I need (they are often minor arrangements), even wrote a cleanup XSLT to remove hidden "gems".

Turns out that looking through what others did is just so much harder that it’s faster to type in the whole thing from scratch (and I could use someone after me to look it over for my typos anyway in both cases).

@EmilyEnough

plagiarism laundering machine

Never thought about it like this, but it is indeed similar to laundering.

You’re trying to give directions to a lossily compressed copy of the entire works of human writing.

This line sums up the futility extremely well.

@EmilyEnough THIS oh my GODDESS 🤯   🙏🏼

@EmilyEnough "I became a professional computer toucher because they do exactly what you tell them to. Not always what you wanted, but exactly what you asked for."

In the 1970s, yes, when you wrote every single byte of code in the machine and could watch every bus cycle on a logic analyser.

I reckon the rot set in long before LLMs - I reckon it started with on-chip cache, so you could no longer see how each instruction operated through each clock cycle, because some instructions no longer needed to touch the bus at all.

@EmilyEnough "squishy" computing

@EmilyEnough
The most interesting detail for my autistic brain is that professional computer touchers like myself are technically proletarians but are paid like the managerial class (also called the labour aristocracy).

So what we are - imho - witnessing is the proletarianization of computer workers. Like other jobs that used to be well paid before they got eliminated/recuperated.

@EmilyEnough very interesting observation, thanks a lot. I hadn’t perceived it that way - I used to work a lot with probabilistic models, simulated annealing, and genetic algorithms, and in those cases the computer works entirely deterministically but the result is always different. So I lost my expectation of a deterministic result a long time ago ;-)
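That observation can be sketched in a few lines (a toy random-restart-style optimizer with a hypothetical objective, not any specific annealing code): every instruction executes deterministically, yet each unseeded run can land somewhere different; fixing the seed restores bit-for-bit reproducibility.

```python
import random

def noisy_search(rng, iterations=100):
    """Hill-climb toward x = 3 using random proposals (toy objective)."""
    best = rng.uniform(-10, 10)
    for _ in range(iterations):
        candidate = best + rng.gauss(0, 1)
        # Accept only proposals that move closer to the target.
        if abs(candidate - 3) < abs(best - 3):
            best = candidate
    return best

# Unseeded: a different answer every run, even though each
# individual instruction is perfectly deterministic.
print(noisy_search(random.Random()))

# Seeded: identical results across runs.
assert noisy_search(random.Random(42)) == noisy_search(random.Random(42))
```

The "unpredictability" lives entirely in the seed; stochastic algorithms stay auditable in a way a conversational LLM interface does not.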
@EmilyEnough Very astute and exactly my experience too - I went into computing for the same kinds of reasons and as you say LLMs break that. Thank you for expressing it so clearly.

@EmilyEnough I think you're absolutely correct on this. Yet another reason why we need to find a way to irrevocably destroy this abomination.

But also it's not just the style of "communication" that these algorithms are pretending to do, it's that you cannot trust that their output is even correct because they have no understanding of what they are "saying". They could be "hallucinating" complete nonsense but they'll output it in an authoritative way and may even make up references that don't exist. They're 100% bullshit generators (it's even been scientifically proven).

@evildrganymede This post is a hallucination. It's weird how concepts people came up with two years ago, and which have since been disproven, are repeated as fact. You're not an LLM, but here you are, bullshitting because you need updated training. Not sure why you're better - I guess because you have authority as a human being, and have totally misled us... that's better?

@EmilyEnough

My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine

Also known as “training”. When people are trained in art, they don’t reinvent art from scratch. This is why you can’t really sue an LLM for plagiarism: you can’t even identify specific victims in the first place.

and disaster for the environment,

Nope. The whole IT sector uses about 3–5% of global electricity, so poor home insulation is a much bigger problem overall.

is that they introduce so much unpredictability into computing.

We call it a statistical method, or more precisely a stochastic system. Because, to a large extent, human behaviour itself can be modelled as a stochastic process.

If I wanted to spend my time communicating in a superficial, neurotypical style my autistic ass certainly wouldn’t have gone into computering.

The problems you face when communicating with LLMs are the same ones you face when communicating with people, because statistically speaking an LLM mimics how people communicate.

This is why computer-mediated communication was used before, and is still used, in contexts where computers are not trying to mimic humans.

The core issue is that mimicking humans reproduces the same communication problems people already have with one another; the “unpredictability” of the other party is nothing new in human interaction. The point is that you consider it normal when you face exactly the same issues with other people.

@uriel @EmilyEnough

> Nope. The whole IT sector uses about 3–5% of global electricity, so poor home insulation is a much bigger problem overall.

Source?

> We call it a statistical method, or more precisely a stochastic system. Because, to a large extent, human behaviour itself can be modelled as a stochastic process.

Source? In fact this is false. Human behaviour includes more than a stochastic process, even though it may adopt stochastic heuristics to speed up some computational parts. This is also why LLMs are technically speaking *not* AI. An AI includes, as human reasoning does, an internal world model and the basic set of Boolean probability-logic rules. See for instance Russell & Norvig's *Artificial Intelligence: A Modern Approach* (http://aima.cs.berkeley.edu/global-index.html), or Pearl's older *Probabilistic Reasoning in Intelligent Systems* (https://doi.org/10.1016/C2009-0-27609-4). LLMs are, instead, just Markov chains (https://doi.org/10.48550/arXiv.2410.02724). A modern robot vacuum cleaner is more "AI" than an LLM.

This is also the reason why the larger the software project you apply an LLM to, the more likely the failure. That kind of application requires longer and longer string correlations, which are therefore more and more uncertain and fault-prone, and those faults are in turn harder to spot. It may also require new or innovative kinds of solutions, which an LLM is even less likely to stumble upon.

> The problems you face when communicating with LLMs are the same ones you face when communicating with people, because statistically speaking an LLM mimics how people communicate.

No, because humans, and also *proper AI*, have a "logic engine" underneath. It may require some effort to bring the logic engine to the fore instead of poor heuristics, but it can be done (related: Kahneman's *Thinking, Fast and Slow*, and the research cited there). With LLMs it can't be done, because there's no logic engine there at all.


@pglpm @EmilyEnough

> Source?

Who am I, your secretary? Just google it.

Here is my answer, in full:

https://keinpfusch.net/those-who-fear-ai/

@uriel @EmilyEnough
No, you're the one making the claim, so the onus is on you to give evidence.

@pglpm @EmilyEnough

OK, since you aren't able to, let me google for sources:

https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

MIT says 4.4%.

Arxiv is so full of shit, I don't even care. WARNING: next time you ask me to google something for you, since you are too stupid to do it yourself, you must pay me.

@uriel @EmilyEnough

So:
- you make claims without supporting evidence,
- you simply dismiss as "full of shit" any evidence that's inconvenient to you,
- you just call others "stupid".

I don't know if you think you're smart, but with these traits other people see very clearly that you're no different from a flat-earther, and will treat your claims accordingly. Guess who's the one "full of shit".

Bye bye Mr Flat-Earth.

@EmilyEnough

"They turn a very autistic do-what-you-say, say-what-you-mean communication style with the machine into a neurotypical conversation talking around the issue, but never directly addressing the substance of the problem."

OMG - That's perfect. Maybe this also explains why everyone loves them so much. 🤨

@EmilyEnough There are so many "My biggest problem with LLMs, even if it weren't for <list of other big problems>" posts, there should be a collection of them somewhere.

But, yes, this bit bugs (pun intended) me and worries me. I'm more and more falling for BEAM family languages (Erlang, Elixir and Gleam) because of how they are designed to be as predictable as possible.

It may not be too odd that I see a lot less AI push in that ecosystem compared to other ones.

@EmilyEnough Well said. This could never have been LLM-generated. 🙂👍

@EmilyEnough this is a very justified rant

But the thought of computers being too autistic, so people had to turn them neurotypical by adding LLMs, is just so funny

@Chase @EmilyEnough yeah the concept of a neurotypical computer is forever living in my head rent free
@EmilyEnough As your fellow ND professional computer toucher, I'm 100% with you - the unpredictability drives me batty. If I want an RNG I'll call one - what I intend to be deterministic should be deterministic, verifiably and repeatably. Lipsticked-pig LLMs have snuck into what I have to do for work, and beating one's head against that BS is a good way to eventually flame the fuck out of tech. Corporate-controlled computing was a mistake.

@EmilyEnough

To me, the whole self-declared #AI industry is a massive financial scam. As someone else wrote: 'it is the industry pushing multi-billion dollar solutions to million dollar problems'.

@EmilyEnough I completely agree. This rant inspired a tangential thought. There’s an article, “ChatGPT is Bullshit,” that talks a lot about how LLMs are bullshit generators. It starts with Harry Frankfurt’s famous essay “On Bullshit,” which defines bullshit as distinct from lying. As I recall, a lie requires two things: some reference to the truth (you can’t lie without knowing that what you’re saying isn’t true), and some intent. It argues that a liar needs intent and a bullshitter doesn’t care.

It’s clear that LLMs have no reference to something like truth. That’s easy. But intent? The article makes a decent case that LLMs have a built-in intent: deception. Pretending to be human is their intent. They “intend” to write words that are very human-like. So do they have intent? Maybe. It’s part of why all the best uses of LLMs are around fraud.

I thought this might be an interesting slight pivot off the idea that they don’t have intent. You’re right: they don’t have it like a human, who presumably has some point; some reason for writing what they write. But maybe there is a latent intent.

https://link.springer.com/article/10.1007/s10676-024-09775-5

ChatGPT is bullshit - Ethics and Information Technology

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

@EmilyEnough Yeah, very telling that the people most excited about LLMs seem to be middle managers and C-levels: people adept at the "waffling about" conversations.

@EmilyEnough thank you, I can absolutely relate to that! ❤️

The struggle with coworkers/managers who don't see the ambiguity or inaccuracy in the requirements they want me to write software for seems to be the same lack of understanding I hit when talking with those same people about software produced by LLMs. They seem to favor "something, but faster" over "the correct thing," and when this is pointed out, the "solution" seems to be to generate multiple iterations until finally reaching a "good enough" version. This is absolutely not how I understand my profession.

@winniehell This is a reason why I doubt LLMs will take over my job. I still have the same job of phrasing words well enough to get the software to do the work well. Only the programming language has changed 🤷🏻‍♂️


@EmilyEnough I feel this so much! :)
@EmilyEnough Another thing is that it seems to hijack the thinking autonomy of a lot of people. People defer to an LLM instead of putting the struggle and effort into researching and learning. I'm not anti-convenience, but when we don't need to think about things anymore, the brain's thinking facilities just atrophy.
@wallabra @EmilyEnough This isn't unique to LLMs. I've seen people defer to an Excel spreadsheet that plainly had been built with faulty assumptions.
@DocBohn @EmilyEnough That is true! People defer to things they shouldn't all the time. I just think LLMs are the next level of this, one that's about to be way worse, and way more societally impactful, than any before. I mean, look at what it's doing to primary education, like smartphones - the shiny silicon tablets designed to a tee to trap your attention - didn't do enough damage to it already.
@EmilyEnough “ You’re trying to give directions to a lossily compressed copy of the entire works of human writing.” — Perfect.
@EmilyEnough @drahardja
Exactly. I too need my automations to be deterministic. The element of surprise is fine for a novel, but not for a health care integration.
@EmilyEnough To paraphrase some random professional in the industry no one cares about: "If English and other natural languages were specific enough to describe tasks to a computer, we wouldn't have invented programming languages, and bugs wouldn't happen." (Uncle Bob)

Some people just refuse to understand that you can't solve all of your problems by speaking English.
@EmilyEnough this makes me wonder if only NT people fall for LLMs, because ND people just take one look and go "this thing is a liar"

@EmilyEnough I think LLMs have some buttons and knobs, but it is mostly self-reflection. I'm astonished how fast one picks up a vibe (not necessarily what I want or need, but it is a possible reaction).
I can see that two people get completely different results just because they phrase their idea differently.
It reminds me of Google search geniuses, back when Google didn't just have the mainstream results.

But I think it is a hype, and it's being used extractively. I don't see reasonable use from the commercial side.

I used to feel like the knowledge of the world was at my fingertips, and there were so many authentic, unique, and interesting people/projects on the internet (2008).

Today the internet is too much for me.
I can't even find the things I know exist, and I think it's because of semantic search - my google-fu doesn't work anymore. It's frustrating for me, but that's how it is today, and likely I just have to bend to majority needs/capitalism.

@EmilyEnough
How right and on point you are

@EmilyEnough I think they only have a future, and indeed utility, when 1) run locally, 2) based not on stolen data, and 3) highly customized to a specific task (there are a few tasks I find them useful for, e.g. searching a text corpus with very vague terms)

and definitely not with a subservient chatbot user interface

@EmilyEnough (fwiw I think that ALL of the “AI” companies are some form of investment scam)
@EmilyEnough I have a slightly different view. An LLM has some of the same language processing issues that I do, to the point that “I have LLM brain” is a useful cognitive model. It makes them surprisingly easy to “play” for me. The ability to take something I don’t understand and rewrite it into something else that aligns better with the corpus of normals-thought is definitely useful to me for understanding how normals communicate and bypassing my own limitations there.
@EmilyEnough all that said I don’t use the slop for anything other than finding my own way to say things.

@EmilyEnough I'm stressing so hard over this... Like I've got 19 years of experience, senior engineer, went through the pipeline of:
- company over-relies on telemetry and fails to make product better
- blindly invests in ai to try and save themselves
- shit hits fan and mass layoffs

And honestly I'm not sure if I've got any job prospects in my future, in a field that's prioritizing getting it "done" regardless of whether the engineers understand the code they're committing.