My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. I became a professional computer toucher because they do exactly what you tell them to. Not always what you wanted, but exactly what you asked for.

LLMs turn that upside down. They turn a very autistic do-what-you-say, say-what-you-mean communication style with the machine into a neurotypical conversation that talks around the issue but never directly addresses the substance of the problem.

In any conversation I have with a person, I’m modeling their understanding of the topic at hand, trying to tailor my communication style to their needs. The same applies to programming languages and frameworks. If you work with a language the way its author intended, things go a lot easier.

But LLMs don’t have an understanding of the conversation. There is no intent. It’s just a most-likely-next-word generator on steroids. You’re trying to give directions to a lossily compressed copy of the entire body of human writing. There is no mind to model, and no predictability in the output.
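To illustrate what “most-likely-next-word generator” means, here is a toy sketch of my own (deliberately simplified — real LLMs use neural networks over subword tokens and vastly more data, not raw bigram counts): a model that always emits whichever word most often followed the previous one in its training text.

```python
from collections import Counter

# Toy "training data" standing in for the compressed corpus.
corpus = "the cat sat on the mat and the cat ate the cream".split()

# Count how often each word follows each other word.
bigram_counts = Counter(zip(corpus, corpus[1:]))

def most_likely_next_word(word):
    """Greedily pick the word that most often followed `word`."""
    candidates = {b: n for (a, b), n in bigram_counts.items() if a == word}
    return max(candidates, key=candidates.get)

print(most_likely_next_word("the"))  # "cat" — it follows "the" twice here
```

The point of the toy: the function has no model of the speaker or their intent; it only reflects frequency statistics of its training text, and its answers shift if the corpus shifts.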

If I wanted to spend my time communicating in a superficial, neurotypical style my autistic ass certainly wouldn’t have gone into computering. LLMs are the final act of the finance bros and capitalists wrestling modern technology away from the technically literate proletariat who built it.

@EmilyEnough Wow, I have thought a lot about how coding LLMs are antithetical to my own OCD tendencies that want everything to be built and formatted in a very specific way (i.e. the right way), but had not considered how terrible the interface would be for folks who prefer not to have to process information conversationally.

I would love to read an entire book or series of articles about how LLMs as an interface enforce neurotypical modes of communication on neurodiverse people.

@mikemccaffrey @EmilyEnough The "you can write natural language queries" idea has always gotten a response from me of "why the fuck would I want to do that?" Standard search engine queries and stuff are so much easier.
@gourd @mikemccaffrey @EmilyEnough "I don't want to spend thirty minutes learning! I don't want to read a guide! I don't want to learn how to use a tool! I'm afraid of learning!"
People are taught to be uncurious & to be terrified of learning things now. Maybe the reason most people don't complain about search engines being nonfunctional now is because most people do not use search engines, libraries, or other methods of seeking information. They're ok with not knowing. They prefer to not know. Very dystopian.
@gourd @mikemccaffrey @EmilyEnough I completely agree, and what is "natural language" anyway?! Sounds like an ableist agenda, right?

@mikemccaffrey Neurotypicality is just one of many biases that LLMs amplify. They also amplify the latent racism, sexism, ableism, Western ideologies that dominate English language writing online, etc.

But until I read this post by @EmilyEnough , I didn’t realise what a neurodivergent torture device LLMs are. I think not enough has been written on that subject yet. My adult son is neurodivergent and an awesome programmer. He also hates LLMs with a passion. I’m now seeing how this all comes together.

@mikemccaffrey @EmilyEnough there’s a related situation (without all the other downsides): I often take scans of public domain sheet music and turn them into digital musical engravings (which you can then play, print, convert into Braille music, easily arrange, etc).

In the beginning, I thought it would be easier to take a digital score of the same piece from someone else and just fix bugs and remove and add things until it represents what I need (they are often minor arrangements), even wrote a cleanup XSLT to remove hidden "gems".

Turns out that working through what others did is so much harder that it’s faster to type the whole thing in from scratch (and in both cases I could use someone to look it over afterwards for my typos anyway).
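For what it’s worth, the cleanup pass I mentioned doesn’t have to be XSLT. Here is a minimal sketch of the same idea in Python (my own example, assuming the scores are MusicXML, where invisible elements are marked with the print-object="no" attribute):

```python
import xml.etree.ElementTree as ET

def strip_hidden(root):
    """Remove every element marked invisible via print-object="no"."""
    for parent in root.iter():
        # Copy the child list so we can remove while iterating.
        for child in list(parent):
            if child.get("print-object") == "no":
                parent.remove(child)
    return root

measure = ET.fromstring(
    '<measure><note/><note print-object="no"/><note/></measure>'
)
strip_hidden(measure)
print(len(measure.findall("note")))  # 2 visible notes remain
```

A real score would of course need more than this one rule — it’s just the shape of the “remove hidden gems” step.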