As mentioned before, I hate bringing this up because I have no evidence or expertise here, just a gut feeling. But I just can't help feeling like, aside from everything else about LLM chatbots, they're quickly becoming the leaded gasoline of our time.

Something doing real damage to human cognition, but in this diffuse and difficult-to-measure kind of way.

Many, not nearly all but *many*, folks using these things seem (again, as a gut feeling) to just talk differently after contact with chatbots? I can't even quite put my finger on it, but it scares the shit out of me.

It's not even an argument against chatbots, I have plenty of arguments that are far better substantiated, it's a personal fear about what they're doing.

I've gotten a number of replies and seen a fair bit of discussion elsewhere to the effect that this is a consequence of having an automated yes-man at your beck and call.

I don't think that's wrong, but it's also not what I'm getting at. Yes-men will validate your bad ideas, pushing you towards losing the criticality required to distinguish good ideas from bad ones. But what I've casually observed (again as a non-expert) is people losing the ability to express ideas *at all*.

Someone yes-manned to hell might make a bad movie because no one is around to tell them that the idea for that movie sucks. We've definitely seen that in any number of walks of life, but I suspect (as a non-expert making observations entirely devoid of rigor) that we're seeing something different and significantly worse still.

I beg you, please take this whole thread and others that I post along the same lines with a massive grain of salt. I do *not* know what I'm talking about here. I come from a place of seeing a thing, not knowing what in the fuck it is, and seeing comparatively little in the ways of expert analysis that I could use to understand what I'm seeing.

Normally if I don't have a fucking clue, I try to shut the fuck up. But there's something *missing* here, and I'm trying to express why that scares me.

@xgranade I have been thinking of it like gambling or microtransactions in the sense of how they are addictive. They provide a reward (a good answer) sometimes, but not every time? It seems like how they are addictive is similar.

But the reward is "not having to think" so I feel like it doesn't necessarily need someone to do a study.

@xgranade When something is an obvious cognitohazard (just empirically, seeing how many people have been completely messed up by it) and nobody who should be talking about it is talking about it, I don't think you need to apologize for not knowing what you're talking about. You don't have to have the answers or expertise to study or explain it thoroughly to know there's something that people need to be paying attention to, and that folks with the qualifications to study it need to be studying it (and aren't, because funding dries up immediately if you try).
@xgranade Your personal observations are absolutely valid. The impact on users and society is what's been missing in its development.

@xgranade
I see the yes-man thing for sure, and having something enthusiastically giving one permission to believe whatever one wants to think is essentially Q-Anon, but automated and immediate.

I also see something more subtle: encouragement to abandon thinking. Outsourcing problem-solving, creativity, and thought itself is already known to be attractive and corrosive, and is what often kills retirees.

I think we may be seeing premature mental senescence automated at scale

@screwturn @xgranade

I'd love to tell you you're imagining things.

You're not.

It's a weapon.

The fucking thing is a weapon. On a whole lot of different levels.

Remember, everything is about power and control. One time I was looking at medical studies to participate in, cause I'm poor, and one turned out to be a dog food company wanting to stick electrodes to people's heads to measure their reactions to ads. The CIA's torture program explicitly researched things like learned helplessness. What the hell WOULDN'T this system try to do, to warp people's minds and push social engineering? That's what the LLM systems DO: studying conversations, what is related to what and leads to these reactions, this engagement. How to make people interested, how to make them angry. Patterns of addiction and despair.

I work in mental health, and this shit makes my hair stand on end.

https://sightlessscribbles.com/posts/the-colonization-of-confidence/

The Colonization of Confidence, Sightless Scribbles

@xgranade LLMs are lending a lot of credence to the idea that consciousness is a social phenomenon. We learn not only what to think, but also *how to think at all* from those around us.

It does not surprise me even slightly that mirror neurons attempting to mirror an empty mask degrades cognition.

----

I am not a professional, but I am an expert in philosophy and philosophy of mind. I have degrees to prove it, for whatever they're worth (not as much as one might hope).

@xgranade Using your brain critically (or at all) is a skill that can atrophy like any other, I think. When I (very briefly) played with Claude for some programming tasks, it didn't take long for me to get intellectually lazy (and that's about when I stopped using it. Also it just sucked and made shit up). This is why LLMs freak me out: I think they let you outsource enough thinking to rapidly start losing the ability to think for yourself.

@xgranade

"losing the ability to express ideas *at all*"

This is something that discourses around media literacy touch upon. To wit, that to be able to create media well (fluency in the creation of media being part and parcel of literacy), one has to be able to critically read media.

And that's been a discourse that predates LLMs. There's an intensification, to be sure, but the fundamental issue of folk not developing, let alone maintaining, the skills to engage with ideas as anything more than signifiers of group identity, thus not being able to express ideas except as a performance of that identity, has a history.

Which is to say, contemporary chatbots embody, in microcosm, a "sometimes the curtains are just blue" relationship to communication. Even when relied on for authoritative claims, there's a kayfabe awareness that the chatbot doesn't have intention, thus everything it says falls under the "it's not that deep, bro" dismissal of exploring, let alone expressing, ideas.

That sentiment, of "Why'd you have to go ruin the spectacle, by having something to say about it?", was the very cultural milieu LLMs needed to thrive.

@xgranade

Also apropos, given that a new piece by this same author is making the rounds, Kingett's "The Colonization of Confidence":

https://sightlessscribbles.com/posts/the-colonization-of-confidence/
