As mentioned before, I hate bringing this up because I have no evidence or expertise here, just a gut feeling. But I just can't help feeling like, aside from everything else about LLM chatbots, they're quickly becoming the leaded gasoline of our time.

Something doing real damage to human cognition, but in this diffuse and difficult to measure kind of way.

Many, not nearly all but *many*, folks using these things seem (again, as a gut feeling) to just talk differently after contact with chatbots? I can't even quite put my finger on it, but it scares the shit out of me.

It's not even an argument against chatbots; I have plenty of arguments that are far better substantiated. It's a personal fear about what they're doing.

@xgranade I see it as well
@xgranade I don't know if it's an effect of the chatbot per se, or a second-order effect of arguing with one's own conscience and inventing strawmen to pull down
@aburka @xgranade I can easily believe it's from prolonged daily exposure to smoothed-over text and learning to speak that way as a form of interface

@SnoopJ @aburka @xgranade

1. I have absolutely seen this kind of—I hate using this term but there's not really any other word for it—"cognitive decline" from many people, and I am collecting a file on documented public instances of it. It's definitely fucking scary. I will say that it is selective, and I don't know why *some* users seem to suffer from it and others don't. I certainly haven't seen a pattern. It seems to be a general pattern of which the infamous "AI psychosis" is a subcategory

@glyph @aburka @xgranade I think "semantic ablation" is quite a good turn of phrase for it. And agreed.
@SnoopJ @aburka @xgranade that is _incredibly_ disturbing, and, also, accurate

@glyph @aburka @xgranade coined last week in this which might have missed you: https://www.theregister.com/2026/02/16/semantic_ablation_ai_writing/

(coined to describe the "AI" writing but here I'm using it to describe the bleed of that into the user's writing/speech)

Why AI writing is so generic, boring, and dangerous: Semantic ablation (The Register opinion: The subtractive bias we're ignoring)

@SnoopJ
Wow.

So, according to this link, AI is like a reverse compression algorithm that keeps redundancy and discards information.
@glyph @aburka @xgranade

@microblogc @glyph @aburka @xgranade I rather prefer how Ted Chiang put it 3 years ago now (!), but since this is attracting attention, just in case anyone present missed that one when it still smelled of fresh bits:

https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

ChatGPT Is a Blurry JPEG of the Web (The New Yorker): The noted speculative-fiction writer Ted Chiang on OpenAI’s chatbot ChatGPT, which, he says, does little more than paraphrase what’s already on the Internet.
@SnoopJ @microblogc @glyph @aburka @xgranade a great read, thank you for the link.