My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. I became a professional computer toucher because they do exactly what you tell them to. Not always what you wanted, but exactly what you asked for.

LLMs turn that upside down. They turn a very autistic do-what-you-say, say-what-you-mean communication style with the machine into a neurotypical conversation that talks around the issue but never directly addresses the substance of the problem.

In any conversation I have with a person, I’m modeling their understanding of the topic at hand and trying to tailor my communication style to their needs. The same applies to programming languages and frameworks: if you work with a language the way its author intended, everything goes a lot more smoothly.

But LLMs don’t have an understanding of the conversation. There is no intent. It’s just a most-likely-next-word generator on steroids. You’re trying to give directions to a lossily compressed copy of the entire works of human writing. There is no mind to model, and no predictability to the output.
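
To be concrete about what I mean by “most-likely-next-word generator”, here’s a toy sketch (my own invented example, nowhere near a real model, but the principle is the same: count what tends to follow what, then emit the statistical favourite):

```python
# Toy "most-likely-next-word" generator: count word bigrams in a corpus,
# then always emit the most frequent successor. Not a real LLM, just the idea.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(word, steps=6):
    out = [word]
    for _ in range(steps):
        if word not in successors:
            break
        # pick the single statistically most likely next word, nothing more
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the cat sat": it loops; argmax has no plan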

If I wanted to spend my time communicating in a superficial, neurotypical style, my autistic ass certainly wouldn’t have gone into computering. LLMs are the final act of the finance bros and capitalists wresting modern technology away from the technically literate proletariat who built it.

@EmilyEnough

My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine

Also known as “training”. When people are trained in art, they don’t reinvent art from scratch. This is why you can’t really sue an LLM for plagiarism: you can’t even identify specific victims in the first place.

and disaster for the environment,

Nope. The whole IT sector uses about 3–5% of global electricity, so poor home insulation is a much bigger problem overall.

is that they introduce so much unpredictability into computing.

We call it a statistical method, or more precisely a stochastic system. Because, to a large extent, human behaviour itself can be modelled as a stochastic process.

If I wanted to spend my time communicating in a superficial, neurotypical style my autistic ass certainly wouldn’t have gone into computering.

The problems you face when communicating with LLMs are the same ones you face when communicating with people, because statistically speaking an LLM mimics how people communicate.

This is also why computer-mediated communication was valued before computers tried to mimic humans, and still is where they don’t.

The core issue is that mimicking humans reproduces the same communication problems people already have with one another; the “unpredictability” of the other party is nothing new in human interaction. The point is that you consider it normal when you run into exactly the same issues with other people.

@uriel @EmilyEnough

> Nope. The whole IT sector uses about 3–5% of global electricity, so poor home insulation is a much bigger problem overall.

Source?

> We call it a statistical method, or more precisely a stochastic system. Because, to a large extent, human behaviour itself can be modelled as a stochastic process.

Source? In fact this is false. Human behaviour includes more than a stochastic process, even though it may adopt stochastic heuristics to speed up some computations. This is also why LLMs are, technically speaking, *not* AI. An AI includes, as human reasoning does, an internal world model and a basic set of Boolean and probability-logic rules. See for instance Russell & Norvig's *Artificial Intelligence: A Modern Approach* (http://aima.cs.berkeley.edu/global-index.html), or Pearl's older *Probabilistic Reasoning in Intelligent Systems* (https://doi.org/10.1016/C2009-0-27609-4). LLMs are, instead, just Markov chains (https://doi.org/10.48550/arXiv.2410.02724). A modern robot vacuum cleaner is more "AI" than an LLM.
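
To illustrate the Markov-chain point (a toy sketch of mine with invented probabilities; the cited paper makes the formal argument): with a fixed context window, next-token generation is literally a Markov chain whose state is the last few tokens, and everything older is forgotten.

```python
# Toy Markov chain over a 2-token context window. The "state" is the last
# two tokens; the next token depends on nothing else. Probabilities invented.
import random

transitions = {
    ("the", "cat"): {"sat": 0.7, "slept": 0.3},
    ("cat", "sat"): {"on": 1.0},
    ("sat", "on"): {"the": 1.0},
    ("on", "the"): {"mat": 0.6, "cat": 0.4},
    ("cat", "slept"): {"soundly": 1.0},
}

def step(state):
    dist = transitions[state]
    tokens, probs = zip(*dist.items())
    nxt = random.choices(tokens, weights=probs)[0]
    return state[1:] + (nxt,), nxt  # anything older than the window is forgotten

state = ("the", "cat")
text = list(state)
while state in transitions and len(text) < 20:
    state, token = step(state)
    text.append(token)
print(" ".join(text))  # e.g. "the cat sat on the mat"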

This is also the reason why the larger the software project you apply an LLM to, the more likely the failure. Such applications require longer and longer string correlations, which are correspondingly more uncertain and fault-prone, and those faults in turn become harder to spot. They may also require new or innovative kinds of solution, which an LLM is even less likely to stumble upon.
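
A back-of-envelope sketch of how that compounds (my own toy model, assuming independent per-token error, which is a simplification since real errors correlate):

```python
# Toy model: if each generated token is independently "right" with
# probability p, an artifact of n tokens is entirely right with p**n.
for p in (0.999, 0.9999):
    for n in (100, 1_000, 10_000):
        print(f"p={p}: n={n:>6} -> P(all correct) = {p**n:.3g}")
# p=0.999 gives ~0.905 at n=100 but only ~4.5e-05 at n=10,000.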

> The problems you face when communicating with LLMs are the same ones you face when communicating with people, because statistically speaking an LLM mimics how people communicate.

No, because humans, and also *proper AI*, have a "logic engine" underneath. It may take some effort to bring the logic engine to the fore instead of poor heuristics, but it can be done (related: Kahneman's *Thinking, Fast and Slow* and the research cited there). With LLMs it can't be done, because there is no logic engine there at all.
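
To show what I mean by a "logic engine", here is a minimal sketch (my own toy example with invented numbers, in the spirit of Pearl's probability logic): explicit hypotheses, explicit evidence, and Bayes' theorem tying them together, so every step of the answer is inspectable.

```python
# Minimal probability-logic step: Bayes' theorem with an explicit model.
def posterior(prior, likelihood, false_alarm):
    """P(H | E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / evidence

# Invented numbers: a bug exists in 1% of modules; the test flags 90% of
# real bugs but also 5% of clean modules.
print(posterior(prior=0.01, likelihood=0.90, false_alarm=0.05))
# ~0.154 -- and the engine can say *why*: every assumption is on the table.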

@pglpm @EmilyEnough

Source?

Who am I, your secretary? Just google it.

Here is my answer, in full:

https://keinpfusch.net/those-who-fear-ai/

@uriel @EmilyEnough
No, you're the one making the claim, so the onus is on you to give evidence.

@pglpm @EmilyEnough

ok, since you aren't able to, let me google for sources:

https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

MIT Technology Review says: 4.4%.

arXiv is so full of shit, I don't even care. WARNING: next time you ask me to google something for you, since you are too stupid for it, you must pay me.

@uriel @EmilyEnough

So:
- you make claims without supporting evidence,
- you simply dismiss as "full of shit" any evidence that's inconvenient to you,
- you just call others "stupid".

I don't know if you think you're smart, but with these traits other people see very clearly that you're no different from a flat-earther, and they will treat your claims accordingly. Guess who's the one "full of shit".

Bye bye Mr Flat-Earth.