
| Blog | https://shendriks.dev |
| Github | https://github.com/shendriks |
| STA$D400 @ Bandcamp | https://stad400.bandcamp.com |
| Codeberg | https://codeberg.org/shendriks |

Marketing in 2026
https://michael-simons.eu/p/marketing-in-2026.html
> Seriously though: I'm convinced that normal people would declare us and our circles completely nuts if they knew what some of us publish every day.
RE: https://mastodon.social/@fn0rd/116253301472991885
What if AI is a cult and not a bubble?
Maybe it's the fact that I'm currently working on a quite interesting and also challenging task at work, but even if that weren't the case: the ambition to develop something in the evenings and on weekends as a side project, building something for myself or maybe for others to share, just got killed over the last year. Like "ok, nice, you've been a good input, now you're just a code slinger…" Yeah, sure.
Maybe it comes back, maybe it doesn't. Feels weird and not particularly good either way.
My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. I became a professional computer toucher because they do exactly what you tell them to. Not always what you wanted, but exactly what you asked for.
LLMs turn that upside down. They turn a very autistic do-what-you-say, say-what-you-mean communication style with the machine into a neurotypical conversation: talking around the issue, but never directly addressing the substance of the problem.
In any conversation I have with a person, I'm modeling their understanding of the topic at hand, trying to tailor my communication style to their needs. The same applies to programming languages and frameworks. If you work with a language the way its author intended, things go a lot easier.
But LLMs don’t have an understanding of the conversation. There is no intent. It’s just a mostly-likely-next-word generator on steroids. You’re trying to give directions to a lossily compressed copy of the entire works of human writing. There is no mind to model, and no predictability to the output.
If I wanted to spend my time communicating in a superficial, neurotypical style my autistic ass certainly wouldn’t have gone into computering. LLMs are the final act of the finance bros and capitalists wrestling modern technology away from the technically literate proletariat who built it.