35 Followers
219 Following
3.6K Posts

God gave me lactose tolerance, and to deny me cheese is to deny his great purpose.

Loves server hosting, studying network engineering.

Profile image: two kitties napping in sun

Gender: a
Pronouns: Any

#PSA: posting photos and videos of your kids online ensures they'll never be able to meaningfully opt out of privacy invasion.

80% of children have an online presence by age two, with parents sharing an average of 1,500 images before their fifth birthday. —2017, Northumbria University

By the age of 13, children have had an average of 1,300 photos and videos of themselves posted to social media by their parents. —2018, UK Children's Commissioner

#Privacy #DataPrivacy

If you can see this post then you will be executed by my pet cat at 21.43 UTC on 24 March 2026
Apparently there is a study that confirms exactly what I've been saying this whole time, but with a twist.

That LLMs hallucinating is a fundamental problem, and it cannot be fixed no matter how advanced their models get. Their best bet would be to make the model respond with something like "I don't know" when it doesn't have an answer. But apparently they train their models so that they are unable to say things like "I don't know the answer".

This is because if they did, they fear people would stop using AI because they'd find it useless.

I still firmly believe the problem is in the nature of the technology used, not the fact that they are trained to lie. These algorithms were designed to predict things, not to know things. They predict an output based on an input. Give them a question and they will predict an answer that sounds like an average human answer to that question. If your question is an average question, you will likely get an accurate, average answer. But the more your question deviates from that average, the more likely the answer will also deviate from the average, accurate answer.
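To make the "predicting, not knowing" point concrete, here's a toy next-word predictor. The corpus and function names are made up for illustration; real LLMs do the statistics with neural networks at vastly larger scale, but the core move is the same: emit the statistically likely continuation, with no concept of whether it's true.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; counts which word follows each word.
corpus = (
    "the cat sat on the mat . "
    "the cat sat by the door . "
    "the cat ate the fish ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent next word seen in training.
    The model only knows frequencies, not facts."""
    return follows[word].most_common(1)[0][0]

print(predict("cat"))  # "sat" -- seen twice, vs. "ate" once
print(predict("the"))  # "cat" -- the most common continuation
```

Ask it something "average" (what follows "the") and it does fine; anything outside the training distribution and it can only guess from whatever counts it has.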

https://arxiv.org/abs/2509.04664
Yeah, this other article sums up exactly what I was saying.

"In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits."

"The researchers demonstrated that hallucinations stemmed from statistical properties of language model training rather than implementation flaws. The study established that “the generative error rate is at least twice the IIV misclassification rate,” where IIV referred to “Is-It-Valid” and demonstrated mathematical lower bounds that prove AI systems will always make a certain percentage of mistakes, no matter how much the technology improves."
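The quoted bound is easy to plug numbers into: if a model misclassifies valid vs. invalid outputs some fraction of the time, its generative error rate is at least double that. The 5% figure below is hypothetical, just to show the arithmetic; it is not from the paper.

```python
def min_generative_error(iiv_misclassification_rate):
    """Lower bound from the quoted result: generative error rate
    is at least twice the Is-It-Valid misclassification rate."""
    return 2 * iiv_misclassification_rate

# Hypothetical: a model that misjudges validity 5% of the time
# hallucinates in at least 10% of its generations.
print(min_generative_error(0.05))  # 0.1
```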

But of course, you read, "Market is already adapting". WHY!? Why does the market have to adapt to the biggest scam ever created!? If it's unreliable, STOP USING AI. PERIOD.

https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html
"...sorry, we did so much more than just 'gender', as you put it. Despite the brevity of our relationship, we knew each other quite well, at least I thought we did. Our last night together was magical, I will never forget it. That feeling of lying there, staring into each other's eyes, the entire world in our hands. I wish that night could've lasted forever; the perfect end to a perfect summer. When I woke up the next day, she was gone. I searched for years after that, trying to find her. I wondered where she went, and if she ever found what she was looking for. When I finally saw her again she had completely changed from the person I fell in love with. She had moved on, and she didn't recognize me anymore. I kept asking myself if she was like this the entire time and I was just blind to it. Maybe everyone else I knew was right. Maybe that relationship was more about me than it was her. Maybe the person I should have been chasing after was myself. Maybe I... hardly knew her."
This is one of the main aspects of my philosophical opposition to "generative AI" and large language models. I don't care how "useful" they might be. Making my life easier or more productive isn't a sufficient justification to submit myself to a system that fundamentally does not respect anyone's unique experience and perspective. It's a system that's biased to enforce cultural conformity and stagnation, rather than embracing diversity and evolution.
In ancient Egypt, around 1200 B.C., a craftsman texted a person named Khay:
“Let there be brought some fresh goose fat directly, very, very quickly because the cat has eaten that which was brought to me yesterday.”

"brew install systemd"

words of the utterly deranged
true HOLY FUCKING SHIT moment this morning: over the holidays, my sticker guys were having a special on red octagons, so I designed and ordered 250 "Slop Sign" stickers β€” but while I was in China, someone stole them out of my mailbox. Bummer, I figured; they probably trashed them as worthless. anyway while walking to the subway I FOUND ONE OF MY STICKERS STUCK TO A MAILBOX! Whoever swiped my stickers is sticking them up around the neighborhood! This is the best thing that could have happened!

Welllllll this isn't great.

Google Just Patented The End Of Your Website

"...a system that evaluates your company's landing page in real time and, if it decides the page won't perform well enough for a specific user, replaces it with an AI-generated version assembled on the fly. The user never sees what your team built, they see what Google's machine learning model thinks they should see instead."

https://www.forbes.com/sites/joetoscano1/2026/03/06/google-just-patented-the-end-of-your-website

#SEO #Google #AI #enshittification
