My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. I became a professional computer toucher because they do exactly what you tell them to. Not always what you wanted, but exactly what you asked for.

LLMs turn that upside down. They turn a very autistic do-what-you-say, say-what-you-mean communication style with the machine into a neurotypical conversation talking around the issue, but never directly addressing the substance of the problem.

In any conversation I have with a person, I’m modeling their understanding of the topic at hand, trying to tailor my communication style to their needs. The same applies to programming languages and frameworks. If you work with a language the way its author intended, everything goes a lot easier.

But LLMs don’t have an understanding of the conversation. There is no intent. It’s just a most-likely-next-word generator on steroids. You’re trying to give directions to a lossily compressed copy of the entire corpus of human writing. There is no mind to model, and no predictability to the output.

If I wanted to spend my time communicating in a superficial, neurotypical style my autistic ass certainly wouldn’t have gone into computering. LLMs are the final act of the finance bros and capitalists wresting modern technology away from the technically literate proletariat who built it.

@EmilyEnough Had an interesting chat with the senior director at my office recently. He pointed out that as far as he can see, he already uses natural language to explain what he wants from software. This is just faster.
It was a perspective I hadn't considered before, but the more I think about it the more I think it's deeply insulting.
@rupert he is telling you flat out that he plans on replacing the expensive translation layer (you) asap. By and large that’s how the entire capital class sees this technology, as a way to eliminate expensive human labor without doing any actual work themselves.
@EmilyEnough I knew the plan. I just couldn't understand why he thought it would work.

@rupert @EmilyEnough

As a system architect, this is also what I do. The thing is, I absolutely depend on the people who do the implementation having good judgement. They need to fill in the gaps (if there were no gaps, I would have an implementation already) but also tell me if there are real problems with some of the ideas. This is why the first thing I do with a design is have it reviewed by people who will implement it. If they tell me ‘actually, this thing you forgot to consider is where our critical path is’ then that often leads to a complete redesign, or at least to significant change. The LLM will just produce something. With an ‘agentic’ loop and some automated testing, it will produce something that passes my tests. But it won’t tell me I’m solving the wrong problem.

I don’t have a problem with constrained nondeterminism in general. There are loads of places where this is fine. The place I used machine learning in my PhD was in prefetching. Get it right and everything is faster. Get it wrong and you haven’t lost much. This kind of asymmetry is great for ML-based probabilistic approaches: the benefit of a correct answer massively outweighs the cost of an incorrect one. The other place it works well is if you have a way of immediately validating the output. I supervised a student using some machine-learning techniques to find better orderings of passes for LLVM. They were tuning for code size (in a student project, this was easier than performance, which requires more testing). You run the old and new versions, one is smaller. That gives you an immediate signal and so using non-deterministic state-space exploration is great. You (probably) won’t get the optimal solution but you will get a good one, for far less effort than trying to reason about the behaviour of the interactions between dozens of transforms.
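The pass-ordering search described above can be sketched roughly as follows. This is a minimal illustration, not the student's actual code: the pass names are placeholders, and `measure_size` is a stub standing in for "compile with this pass order and measure the resulting binary" (which in a real setup would invoke `opt` and `llc`). The point is the shape of the loop: mutate the ordering at random, keep the candidate only if the immediately verifiable signal (code size) improves.

```python
import random

# Placeholder pass names; a real run would feed these to LLVM's `opt`.
PASSES = ["inline", "gvn", "licm", "sroa", "dce", "simplifycfg"]

def measure_size(ordering):
    """Stub for 'compile with this pass order, return code size'.
    Real version: run opt/llc and measure the object file. Here a
    deterministic toy cost function stands in so the sketch runs."""
    return 100 + sum((i + 1) * len(p) for i, p in enumerate(ordering))

def random_search(passes, iters=200, seed=0):
    """Non-deterministic state-space exploration with an immediate
    validation signal: keep a random swap only if it shrinks the code."""
    rng = random.Random(seed)
    best = list(passes)
    best_size = measure_size(best)
    for _ in range(iters):
        cand = best[:]
        i, j = rng.sample(range(len(cand)), 2)  # swap two passes at random
        cand[i], cand[j] = cand[j], cand[i]
        size = measure_size(cand)
        if size < best_size:  # cheap, immediate check: smaller wins
            best, best_size = cand, size
    return best, best_size
```

Because every candidate is validated before it is accepted, a wrong guess costs one compile and nothing more: exactly the asymmetry that makes probabilistic search a good fit here, and that unchecked LLM output lacks.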

It’s not clear to me that LLMs for programming have either of these properties.