I try to keep track of what’s going on among those who use LLMs for coding, but they all keep linking to Steve Yegge, and I just can’t take anybody who links to Steve “gas town” Yegge seriously.

He’s the opposite of convincing.

To those who aren’t “AI”-pilled, Steve Yegge on anything related to LLM coding makes about as much sense as the Roko’s Basilisk nonsense. If you want to be convincing to outsiders, you need to stop citing what are effectively “AI” catechisms.
Steve Yegge is so bad that whenever I want to convince somebody on the fence about “AI” that the biggest LLM boosters all seem to be having serious mental health episodes, I send them a link to one of his posts. Works every time.

@baldur I hadn't encountered that concept before.

https://en.wikipedia.org/wiki/Roko's_basilisk

Strikes me as being equivalent to the argument that we're living in a simulation.

All of this *is* a religion to the ideas' adherents.


@mason @baldur The worst part is the music isn't even any good. At least the Church of England gave us people like Stanford, Bairstow, and Elgar. The Church of Slopology gives us *checks notes* "We Are Charlie Kirk" by "Ten Million GPUs".
@baldur I just saw a post referencing the (TIL) vibe maintainer article and I nearly sprained my eye muscles from the sudden roll.
@baldur well, I’ve linked to it seriously a couple of times to illustrate the mental health toll of LLM addiction 😬
@baldur it’s too much even for Armin Ronacher https://lucumr.pocoo.org/2026/1/18/agent-psychosis/
(not that that apparently made him stop using LLMs in any way…)
@ced well, there were also lots of smokers who freely talked about the dangers of smoking but didn’t stop, and they all thought they had good reasons. @baldur
@ced Haha. Just posted pretty much exactly the same thing 😆
@baldur I remember how my mind was blown the first time I read one of those – like “but what is he on?”
@baldur I cannot figure out Steve Yegge. He can't even figure out himself. He's said some interesting and useful things but also whoa nelly is there a bunch of weird in there too. Definitely not a source to cite without care and context.
@aredridel Yeah, he's become the oddest of ducks these days.
@aredridel @baldur weird isn’t the problem, I like weird. Morally or intellectually inconsistent, that I have trouble with.

@trisweb @baldur Honestly, both of those are fine, though good to know about when citing. Especially if the moral angle is something that’s unsettled and people are casting about for better stances.

I'm starting to think, though, that a hidden piece of AI discourse is whether people can tolerate epistemically questionable sources well. Lots of people can't, it seems, which explains why we were in such a misinformation mess even before LLMs; now we're seeing the seams in places we used to be able to pretend weren't suspect.

I guess it should come as no surprise to me that people who were drawn to computers as deterministic objects would struggle with the absolute probabilistic buffoonery that LLMs generate, and that people have always produced. And those of us drawn to computers as communication objects, in a time of a sometimes-hostile internet public, have a different baseline sentiment about information. (And not to claim “of course I'm in the sweet spot”, but the people who grew up on the chan-pilled internet after my time seem far _too_ comfortable with hostile information spaces, to the point of nihilism about it.)

@aredridel @baldur very true, though I would add that the unreliable information coming from LLMs differs in kind from that coming from people. I’m not sure applying methods meant for how humans work to these stochastic token generators is such a good approach (in fact, it’s kinda one of the big problems rn)

@trisweb @baldur I actually disagree, but with a caveat: I think commercial speech resembles LLM output, and for good reason. It's trying to be “normal”, “centrist”, “inoffensive” (politically correct), broadly appealing, and largely ignorant of truth. It's quite often trying to create a marketing reality.

LLMs are just even better at this.

The incentives to produce this sort of text are still there.

@trisweb @baldur That said, yeah, if you ask those questions well, they'll lead you to different places: "Why is it saying _that_? What's the evidence base?" (And the LLM will more or less lie there: it will give you a plausible explanation, but not one that actually explains why it said what it did.)
@aredridel @baldur Right. Because it’s a language model 😂 The number of people who don’t understand what the technology actually is under the hood is stunning to me. It’s kinda important to how it works.

@trisweb @aredridel @baldur this is part of my own aversion: that kind of understanding is pulling against the grain of the "magic wand" rhetoric, and a lot of the target market isn't particularly interested in understanding it even before that tension.

I don't see any way out of that trap, but I'm also not in a rush to analyze the big picture, since it's so clear that the status quo is an entire galaxy away from what equilibrium will look like.

@SnoopJ @aredridel @baldur yes. Good way to describe the problem. We really are so far off of equilibrium culturally on the whole subject and its myriad downstream impacts (economic, social, political…).

Step one might just be a sane shared understanding of what it is and how it works, to dispel some of the mythology and half-truths. That could also help explain the falsehoods clearly.

A lot of this has parallels in science communication and how difficult it is to combat disinformation there. I think the solution similarly lies in good old-fashioned marketing: the ability to communicate, lead, and treat people like reasonable humans rather than dumb enemies.

I’d like to see a foundation or benevolent project that could maybe take on that work…

@trisweb @SnoopJ @baldur Snapping that together with something else I was reading today: https://skywriter.blue/@eliothiggins.bsky.social/3mitbqzpvhk2x
@trisweb @baldur Yeah. But looking at it closer, no wonder. The only people teaching this are all in on AI. There's a social split isolating the communities that actually discuss this well. And the anti-AI crowd is like 2 years behind and prone to omitting so much detail as to be actually wrong.
@baldur Steve Yegge is an industry plant to get everyone to use up all their tokens in 10s
@baldur the bison at the head of the herd beelining for the edge of the cliff
@jplebreton @baldur "how dare you, it's *sane* cow disease"
@baldur @davidgerard we saw someone link that and say "i don't think 'no-AI PRs' is a sustainable strategy" and our head imploded from the singularity of headass that formed in our headspace

@baldur

Steve Yegge is the Axe Body Spray of tech evangelism.

For some strange reason a segment of society believes he is very compelling, but the vast majority of people want to get as far away from him as possible.