I did not expect a 60k-word essay to keep my attention:
https://knightcolumbia.org/content/ai-as-social-technology

"Rather, they are families of messy statistical models, trained on large data sets. Training a statistical model for good on-average performance implies trading worse performance in rare situations for better performance in common situations. Such trade-offs have unfortunate implications for the capacity of all such models to handle situations raised by small groups of people, by those less well-represented in the training corpora, or indeed by anything genuinely novel."

Perhaps I'm motivated by my brother, who is in the middle of losing a friend down the chatbot rabbit hole.

#LLMsAreBullshit

AI as Social Technology

Knight First Amendment Institute

A long thread in which a proposed remedy for LLM code poisoning is utterly dismantled. Provenance is *central* to GPL, BSD, etc. software licenses.

https://social.coop/@cwebber/116426152872895154

"There is *NO WAY* in current LLM technology, nor I believe from studying how neural networks work, any viable computationally performant LLM, that they can track provenance. The BY clause cannot be upheld."

#FLOSS
#LLMsAreBullshit

Christine Lemmer-Webber (@[email protected])

@[email protected] @[email protected] @[email protected] So let me summarize: - Without knowing the legal status of accepting LLM contributions, we're potentially polluting our codebases with stuff that we are going to have a HELL of a time cleaning up later - The idea of a copyleft-only LLM is a joke and we should not rely on it - We really only have two realistic scenarios: either FOSS projects cannot accept LLM based contributions legally from an international perspective, or everything is effectively in the public domain as outputted from these machines, but at least in the latter scenario we get to weaken copyright for everyone. That's leaving out a lot of other considerations about LLMs and the ethics of using them, which I think most of the other replies were focused on, I largely focused on the copyright implications aspects in this subthread. Because yes, I agree, it can be important to focus a conversation. But we can't ignore this right now. We're putting FOSS codebases at risk.

social.coop

@990000 That's so facepalm it's elbow.

#LLMsAreBullshit

It's nice to be reminded that we're not the last to consider ethics. What we do matters, and what we accept as normal also matters.

https://www.garfieldtech.com/blog/selfish-ai

#LLMsAreBullshit #LLMsAreNotAI

Selfish AI | GarfieldTech

LLMs are not AI, though they use a couple of methods common in AI. I am surprised at how angry I get thinking about how real AI (genetic algorithms, ant colony optimization, simulated annealing, etc.) is probably going to get tarred with the same brush when it all comes crashing down.

https://www.thegist.ie/guest-gist-2026-our-already-rotting-future/
#LLMsAreBullshit

Guest Gist: 2026, Our Already Rotting Future

Seamus O'Reilly warns of AI maximum bubbledrive this year.

The Gist

@cazabon Just reading through the survey start page is aneurysm-inducing. At first, I thought I was reading LLM slop, but then I realized that all of the "definitions" were just aspirational wishlist items.

Propaganda is putting it mildly. This smells like a deputy minister has already decided what's going to happen.

#LLMsAreBullshit

When Sam Altman claims "Someone" will lose a "phenomenal amount of money," I have some ideas (and maybe a small amount of hope) about who that could be.

https://arstechnica.com/information-technology/2025/08/sam-altman-calls-ai-a-bubble-while-seeking-500b-valuation-for-openai/

#LLMsAreBullshit

Is the AI bubble about to pop? Sam Altman is prepared either way.

“Someone will lose a phenomenal amount of money,” says CEO while fundraising at record prices.

Ars Technica

Absolutely pleased with the current state of this as-yet-incomplete course:
https://thebullshitmachines.com/

e.g.
"But what happens when LLMs flood the internet with phony narratives from people who don’t exist? We can no longer trust anonymous speech to be genuine.

To believe what we read, we require authentication: ways of knowing that the authors we are reading and the people we are talking to are real.

But authentication threatens anonymity. If we want to ensure that the stories we read are coming from real people, we will end up excluding certain stories as they become no longer safe to tell."

#LLMsAreBullshit

Modern-Day Oracles or Bullshit Machines: Introduction

A free online humanities course about how to learn and work and thrive in an AI world.

I assert that:

Any task that, in the course of normal execution, can benefit from an "AI assistant" can be improved by streamlining it until the assistant is no longer useful.

Don't feel obligated to try to change my mind.

#AI #LLM #LLMsAreBullshit #ArtificialIntelligenceIsnt