My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism-laundering machine and a disaster for the environment, is that they introduce so much unpredictability into computing. I became a professional computer toucher because computers do exactly what you tell them to. Not always what you wanted, but exactly what you asked for.

LLMs turn that upside down. They turn a very autistic do-what-you-say, say-what-you-mean communication style with the machine into a neurotypical conversation that talks around the issue without ever directly addressing the substance of the problem.

In any conversation I have with a person, I’m modeling their understanding of the topic at hand and trying to tailor my communication style to their needs. The same applies to programming languages and frameworks. If you work with a language the way its author intended, it goes a lot more smoothly.

But LLMs don’t have an understanding of the conversation. There is no intent. It’s just a most-likely-next-word generator on steroids. You’re trying to give directions to a lossily compressed copy of the entire corpus of human writing. There is no mind to model, and no predictability in the output.
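To make the “most-likely-next-word generator” point concrete, here’s a toy sketch of my own (the vocabulary and probabilities are completely made up, and a real LLM scores a ~100k-token vocabulary with a neural network instead of a lookup table). The loop is the same shape, though, and the weighted-dice sampling step is exactly where the unpredictability comes from:

```python
import random

# Made-up next-token table: given the last word, some possible
# continuations and their probabilities. A real model computes these
# scores with a neural network; the structure of the loop is the same.
NEXT = {
    "the": [("cat", 0.5), ("dog", 0.3), ("end", 0.2)],
    "cat": [("sat", 0.6), ("ran", 0.4)],
    "dog": [("ran", 0.7), ("sat", 0.3)],
    "sat": [("on", 1.0)],
    "ran": [("away", 1.0)],
    "on":  [("the", 1.0)],
    "away": [("end", 1.0)],
}

def generate(start: str, max_tokens: int = 10) -> str:
    words = [start]
    for _ in range(max_tokens):
        options = NEXT.get(words[-1])
        if options is None:  # no continuation known: stop
            break
        tokens, probs = zip(*options)
        # The key step: the model doesn't "decide" anything, it rolls
        # weighted dice. Same prompt, different output on every run.
        words.append(random.choices(tokens, weights=probs, k=1)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the dog ran away end"
```

Run it twice and you rarely get the same sentence back, which is precisely the unpredictability I’m complaining about: the determinism only comes back if you always pick the single highest-probability token, and production chatbots deliberately don’t.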

If I wanted to spend my time communicating in a superficial, neurotypical style, my autistic ass certainly wouldn’t have gone into computering. LLMs are the final act of the finance bros and capitalists wresting modern technology away from the technically literate proletariat who built it.

@EmilyEnough I completely agree. This rant inspired a tangential thought. There’s an article, “ChatGPT is Bullshit,” that talks a lot about how LLMs are bullshit generators. It starts with Harry Frankfurt’s famous essay “On Bullshit,” which defines bullshit as distinct from lying. As I recall, a lie requires two things: some reference to the truth (you can’t lie without knowing that what you’re saying isn’t true) and some intent. The article argues that a liar needs intent, while a bullshitter simply doesn’t care.

It’s clear that LLMs have no reference to something like truth. That’s easy. But intent? The article makes a decent case that LLMs have a built-in intent: deception. Pretending to be human is their intent. They “intend” to write words that are very human-like. So do they have intent? Maybe. It’s part of why all the best uses of LLMs are around fraud.

I thought this might be an interesting slight pivot off the idea that they don’t have intent. You’re right that they don’t have it the way a human does, a human who presumably has some point, some reason for writing what they write. But maybe there is a latent intent.

https://link.springer.com/article/10.1007/s10676-024-09775-5

ChatGPT is bullshit - Ethics and Information Technology

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
