I try to keep track of what’s going on among those who use LLMs for coding, but they all keep linking to Steve Yegge, and I just can’t take anybody who links to Steve “gas town” Yegge seriously.

He’s the opposite of convincing.

@baldur I cannot figure out Steve Yegge. He can't even figure himself out. He's said some interesting and useful things, but also, whoa nelly, is there a bunch of weird in there too. Definitely not a source to cite without care and context.
@aredridel @baldur weird isn’t the problem, I like weird. Morally or intellectually inconsistent, that I have trouble with.

@trisweb @baldur Honestly both of those are fine, if good to know when citing. Especially if the moral angle is something that's unsettled and people are casting about for better stances.

I'm starting to think, though, that a hidden piece of AI discourse is how well people can tolerate epistemically questionable sources. Lots of people can't, it seems, which helps explain why we were in such a misinformation mess even before LLMs; now we're seeing the seams in places we used to be able to pretend weren't suspect.

I guess it should come as no surprise to me that people who were drawn to computers as deterministic objects would struggle with the absolute probabilistic buffoonery that LLMs generate, and that people have always generated. Those of us drawn to computers as communication objects, in a time of a sometimes-hostile internet public, have a different baseline sentiment about information. (And not to commit the “of course I'm in the sweet spot” fallacy, but the people who grew up on the chan-pilled internet after my time seem far _too_ comfortable with hostile information spaces, to the point of nihilism about it.)

@aredridel @baldur Very true, though I would add that the unreliable information coming from LLMs differs in kind from that coming from people. I’m not sure applying the methods we use for humans to these stochastic token generators is such a good approach (in fact, it’s kinda one of the big problems rn)

@trisweb @baldur I actually disagree, but with a caveat: I think commercial speech resembles LLM output, and for good reason. It's trying to be “normal”, “centrist”, “inoffensive” (politically correct), broadly appealing, and largely ignorant of truth. It's quite often trying to create a marketing reality.

LLMs are just even better at this.

The incentives to produce this sort of text are still there.

@trisweb @baldur That said, yeah, if you ask those questions well, they'll lead you to different places. “Why is it saying _that_? What's the evidence base?” (And the LLM will more or less lie there: it will give you a plausible explanation, but not one that actually explains why it said what it did.)
@aredridel @baldur Right. Because it’s a language model 😂 The number of people who don’t understand what the technology actually is under the hood is stunning to me. It’s kinda important to how it works.
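A deliberately tiny sketch of what “language model” means here, assuming nothing but a toy bigram table (real LLMs are transformer networks with billions of parameters, but the interface is the same: context in, a distribution over the next token out; every name below is made up for the illustration):

```python
import random

# Toy "language model": a bigram table mapping the last two tokens to
# counts of possible next tokens. Purely illustrative; a real LLM is a
# transformer, but the interface is the same: context in, a probability
# distribution over next tokens out.
BIGRAMS = {
    ("the", "sky"): {"is": 4},
    ("sky", "is"): {"blue": 5, "falling": 1},
}

def next_token(tokens):
    """Sample the next token given the context; no knowledge, just statistics."""
    options = BIGRAMS.get(tuple(tokens[-2:]), {"<eos>": 1})
    choices, weights = zip(*options.items())
    return random.choices(choices, weights=weights)[0]

def generate(prompt, max_new=5):
    """Autoregressive generation: append sampled tokens until end-of-sequence."""
    tokens = prompt.split()
    for _ in range(max_new):
        tok = next_token(tokens)
        if tok == "<eos>":
            break
        tokens.append(tok)
    return " ".join(tokens)

print(generate("the sky"))  # usually "the sky is blue", occasionally "falling"
# Asking it "why did you say that?" just feeds more context through the
# same sampler: you get a plausible continuation, not introspection.
```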
@trisweb @baldur Yeah. But looking at it more closely, it's no wonder. The only people teaching this are all-in on AI. There's a social split isolating the communities that actually discuss this well, and the anti-AI crowd is like two years behind and prone to omitting so much detail as to be actually wrong.