I try to keep track of what’s going on among those who use LLMs for coding but they all keep linking Steve Yegge and I just can’t take anybody who links to Steve “gas town” Yegge seriously.
He’s the opposite of convincing.
@trisweb @baldur Honestly both of those are fine, if good to know when citing. Especially if the moral angle is something that's unsettled and people are casting about for better stances.
I'm starting to think though that a hidden piece of AI discourse is whether people can tolerate epistemically questionable sources well. Lots of people can't, it seems — and it explains why we’re in such a misinformation mess in this world even before LLMs, and now we're seeing the seams in places we used to be able to pretend weren't suspect.
I guess it should come as no surprise to me that people who were drawn to computers as deterministic objects would struggle with the absolute probabilistic buffoonery that LLMs generate, and that people have always generated. Those of us drawn to computers as communication objects, in a time of a sometimes-hostile internet public, have a different baseline sentiment about information. (And not to commit the “of course I'm in the sweet spot” fallacy, but the people who grew up on the chan-pilled internet after my time ended up far _too_ comfortable with hostile information spaces, to the point of nihilism about it.)
@trisweb @baldur I actually disagree, but with a caveat: I think commercial speech resembles LLM output, and for good reason. It's trying to be “normal”, “centrist”, “inoffensive” (politically correct), broadly appealing, and largely ignorant of truth. It's quite often trying to create a marketing reality.
LLMs are just even better at this, and the incentives to produce this sort of text are still there.