As mentioned before, I hate bringing this up because I have no evidence or expertise here, just a gut feeling. But I just can't help feeling like, everything else about LLM chatbots aside, they're quickly becoming the leaded gasoline of our time.

Something doing real damage to human cognition, but in this diffuse and difficult-to-measure kind of way.

Many, not nearly all but *many*, folks using these things seem (again, as a gut feeling) to just talk differently after contact with chatbots? I can't even quite put my finger on it, but it scares the shit out of me.

It's not even an argument against chatbots; I have plenty of arguments that are far better substantiated. It's a personal fear about what they're doing.

I've gotten a number of replies and seen a fair bit of discussion elsewhere to the effect that this is a consequence of having an automated yes-man at your beck and call.

I don't think that's wrong, but it's also not what I'm getting at. Yes-men will validate your bad ideas, eroding the criticality required to distinguish good ideas from bad ones. But what I've casually observed (again as a non-expert) is people losing the ability to express ideas *at all*.

Someone yes-manned to hell might make a bad movie because no one is around to tell them that the idea for that movie sucks. We've definitely seen that in any number of walks of life, but I suspect (as a non-expert making observations entirely devoid of rigor) that we're seeing something different and significantly worse still.

@xgranade LLMs are lending a lot of credence to the idea that consciousness is a social phenomenon. We learn not only what to think, but also *how to think at all* from those around us.

It does not surprise me even slightly that mirror neurons attempting to mirror an empty mask degrades cognition.

----

I am not a professional, but I am an expert in philosophy and philosophy of mind. I have degrees to prove it, for whatever they're worth (not as much as one might hope).