@ennenine


My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. I became a professional computer toucher because they do exactly what you tell them to. Not always what you wanted, but exactly what you asked for.

LLMs turn that upside down. They turn a very autistic do-what-you-say, say-what-you-mean communication style with the machine into a neurotypical conversation that talks around the issue but never directly addresses the substance of the problem.

In any conversation I have with a person, I’m modeling their understanding of the topic at hand, trying to tailor my communication style to their needs. The same applies to programming languages and frameworks. If you work with a language the way its author intended, things go a lot easier.

But LLMs don’t have an understanding of the conversation. There is no intent. It’s just a most-likely-next-word generator on steroids. You’re trying to give directions to a lossily compressed copy of the entire corpus of human writing. There is no mind to model, and no predictability in the output.
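For the curious, "most-likely-next-word generator" can be made literal with a toy bigram model. This is a drastic simplification (real LLMs run transformers over tokens, not word counts), but the core move is the same: pick the statistically likeliest continuation, with no model of intent anywhere.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which word in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts, prev):
    """Greedily emit the single most frequent next word: pure statistics, no intent."""
    if prev not in counts:
        return None
    return counts[prev].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat the cat ran")
print(next_word(model, "the"))  # "cat" follows "the" twice, "mat" only once -> "cat"
```

Scale the word counts up to trillions of tokens and swap the counting for a neural network, and you have the shape of the thing people are handing their decisions to.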

If I wanted to spend my time communicating in a superficial, neurotypical style my autistic ass certainly wouldn’t have gone into computering. LLMs are the final act of the finance bros and capitalists wrestling modern technology away from the technically literate proletariat who built it.

"We are born under circumstances that would be favorable if we did not abandon them. It was nature's intention that there should be no need of great equipment for a good life: every individual can make himself happy."

-Seneca

"they emphasised that a human could also give erroneous advice."

I dunno, seems like humans don't do this as often or as impactfully. And when they do there are consequences. When will LLMs receive their pink slips?

https://www.theguardian.com/technology/2026/mar/20/meta-ai-agents-instruction-causes-large-sensitive-data-leak-to-employees

Meta AI agent’s instruction causes large sensitive data leak to employees

Artificial intelligence agent instructed engineer to take actions that exposed user and company data internally

The Guardian

Need some more happiness in your feed? I highly recommend @eclectech - makes me smile daily.

@eclectech https://things.uk/@eclectech/116263959767407861

eclectech (@[email protected])

Attached: 1 image *waves* #sillyScribbles #featherFolk

things.uk
Stop the massive data center in Stroudsburg, PA

Can you spare a minute to help this campaign?

Change.org

Forgive me
For being jealous
Of your ribs
For how they
Cradle your
Heart.

#poetry

Many videos don't age well, but this one aged TOO well. I feel like #WhiskeyPete saw this and didn't realize it was sarcastic:

https://www.youtube.com/watch?v=UrgpZ0fUixs&list=RDUrgpZ0fUixs

Denis Leary - Asshole (Uncensored Version)

YouTube
Related ... "Hacked" is doing some pretty heavy lifting. More like "morons kept default passwords on public-facing systems, and those defaults are documented publicly online"

This is both hilarious and a perfect illustration of how society needs to slow down with tech until companies and municipalities can figure out how to be far more responsible than they currently are.

https://youtu.be/Wy1oyOnro3g?si=qeIw8g3FZ8i3Suap

Denver fixes hacked crosswalk audio that played anti-Trump messages

YouTube