As mentioned before, I hate bringing this up because I have no evidence or expertise here, just a gut feeling. But I just can't help feeling like, aside from everything else about LLM chatbots, they're quickly becoming the leaded gasoline of our time.

Something doing real damage to human cognition, but in this diffuse and difficult to measure kind of way.

Many, not nearly all but *many*, folks using these things seem (again, as a gut feeling) to just talk differently after contact with chatbots? I can't even quite put my finger on it, but it scares the shit out of me.

It's not even an argument against chatbots, I have plenty of arguments that are far better substantiated, it's a personal fear about what they're doing.

I've gotten a number of replies, and seen a fair bit of discussion elsewhere, to the effect that this is a consequence of having an automated yes-man at your beck and call.

I don't think that's wrong, but it's also not what I'm getting at. Yes-men will validate your bad ideas, pushing you towards losing the criticality required to distinguish good ideas from bad ones. But what I've casually observed (again as a non-expert) is people losing the ability to express ideas *at all*.

@xgranade

"losing the ability to express ideas *at all*"

This is something that discourses around media literacy touch upon. To wit, that to be able to create media well (fluency in the creation of media being part and parcel of literacy), one has to be able to critically read media.

And that's been a discourse that predates LLMs. There's an intensification, to be sure, but the fundamental issue of folk not developing, let alone maintaining, the skills to engage with ideas as anything more than signifiers of group identity, thus not being able to express ideas except as a performance of that identity, has a history.

Which is to say, contemporary chatbots embody, in microcosm, a "sometimes the curtains are just blue" relationship to communication. Even when relied on for authoritative claims, there's a kayfabe awareness that the chatbot doesn't have intention, thus everything it says falls under the "it's not that deep, bro" dismissal of exploring, let alone expressing, ideas.

That sentiment, of "Why'd you have to go ruin the spectacle, by having something to say about it?", was the very cultural milieu LLMs needed to thrive.

@xgranade

Also apropos, given that a new piece by this same author is making the rounds, Kingett's "The Colonization of Confidence":

https://sightlessscribbles.com/posts/the-colonization-of-confidence/
