Choose distributed.
he/him
#iroh #deltachat
| home | https://devork.be |
| github | https://github.com/flub |
imagine how much calmer our lives would be if apps and websites would fucking stop logging us out all the time
at this point i’m reacting with primal rage and have to stop myself from yelling “WHY!?” at all the GitLabs, Sentries, LWNs, Blueskies, and let’s not even get started with airlines
End-to-end encryption is good, but metadata protection counts just as much. Names, group descriptions and memberships, avatars, who talks to whom ...
Both #deltachat and #signal go to great lengths to protect all the metadata that WhatsApp grants itself gratuitously. #Matrix stores similar amounts of metadata on its servers, even if you can choose which server stores it.
Anything is better than #Telegram, which additionally stores message contents for all group chats/channels and most 1:1 chats.
Pfizer just announced that its Lyme disease vaccine reduced the number of tick-borne infections by over 70% in a phase 3 trial.
Warning: GSK pulled a Lyme vaccine in 2002 despite studies showing it posed no serious safety risks.
Any new Lyme vaccine is going to be a huge target for antivaxxers.
My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. I became a professional computer toucher because they do exactly what you tell them to. Not always what you wanted, but exactly what you asked for.
LLMs turn that upside down. They turn a very autistic do-what-you-say, say-what-you-mean communication style with the machine into a neurotypical conversation talking around the issue, but never directly addressing the substance of the problem.
In any conversation I have with a person, I’m modeling their understanding of the topic at hand, trying to tailor my communication style to their needs. The same applies to programming languages and frameworks. If you work with a language the way its author intended, everything goes a lot easier.
But LLMs don’t have an understanding of the conversation. There is no intent. It’s just a most-likely-next-word generator on steroids. You’re trying to give directions to a lossily compressed copy of the entire corpus of human writing. There is no mind to model, and no predictability to the output.
If I wanted to spend my time communicating in a superficial, neurotypical style, my autistic ass certainly wouldn’t have gone into computering. LLMs are the final act of the finance bros and capitalists wresting modern technology away from the technically literate proletariat who built it.
