As mentioned before, I hate bringing this up because I have no evidence or expertise here, just a gut feeling. But I can't help feeling that, aside from everything else about LLM chatbots, they're quickly becoming the leaded gasoline of our time.

Something doing real damage to human cognition, but in this diffuse and difficult-to-measure kind of way.

Many, not nearly all but *many*, folks using these things seem (again, as a gut feeling) to just talk differently after contact with chatbots? I can't quite put my finger on it, but it scares the shit out of me.

It's not even an argument against chatbots; I have plenty of arguments that are far better substantiated. It's a personal fear about what they're doing.

@xgranade I see it as well
@xgranade I don't know if it's an effect of the chatbot per se, or a second-order effect of arguing with one's own conscience and inventing strawmen to pull down
@aburka @xgranade I can easily believe it's from prolonged daily exposure to smoothed-over text and learning to speak that way as a form of interface

@SnoopJ @aburka @xgranade

1. I have absolutely seen this kind of—I hate using this term but there's not really any other word for it—"cognitive decline" in many people, and I am collecting a file of documented public instances of it. It's definitely fucking scary. I will say that it is selective, and I don't know why *some* users seem to suffer from it and others don't. I certainly haven't seen a pattern. It seems to be a general phenomenon of which the infamous "AI psychosis" is a subcategory.

@SnoopJ @aburka @xgranade

2. never in my life have I used the phrase "tetraethyl lead" more frequently than in the last 6 months. not even close.

@SnoopJ @aburka @xgranade

3. It's also because of (1.) that my own usage has shrunk to nothing. I think that some of the people I am presently arguing with will end up being "safe" but I don't know which ones, or why. I haven't seen any plausible safety protocols; I do not know how to experiment with it safely. So every time I open up a chat prompt I feel like I'm asking myself "how big of a swig from this flask of luminous radium paint do I feel comfortable drinking in one sitting".

@glyph @aburka @xgranade I think "semantic ablation" is quite a good turn of phrase for it. And agreed.
@SnoopJ @aburka @xgranade that is _incredibly_ disturbing, and, also, accurate
@SnoopJ @aburka @xgranade Flowers for Altman amirite

@glyph @SnoopJ @aburka @xgranade

Does the SCP Foundation have any general material around environmental and occupational infohazards? Because model-based conversation entities would seem to fit.

(joking, but only barely so: SCP is satire as much as it is speculative existential horror)

Unless a more helpful approach would be anthropology or sociology: "tools shape users," or a power-and-consent analysis.


@glyph @aburka @xgranade coined last week in this, which you may have missed: https://www.theregister.com/2026/02/16/semantic_ablation_ai_writing/

(coined to describe the "AI" writing but here I'm using it to describe the bleed of that into the user's writing/speech)

Why AI writing is so generic, boring, and dangerous: Semantic ablation (opinion, The Register)
@SnoopJ @glyph @xgranade ah thanks, I'll add it to my 100 open tabs of "stuff to read about AI"
@SnoopJ @glyph @aburka Oh, fuck, that makes so much sense. Data processing inequalities rearing their extremely sharp teeth and all.
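
(For anyone who hasn't run into it: the data processing inequality being alluded to says that for any Markov chain \(X \to Y \to Z\), processing can't create information about the source:

```latex
X \to Y \to Z \quad\Longrightarrow\quad I(X;Z) \le I(X;Y)
```

i.e., each hop of generate-then-imitate can at best preserve, and typically loses, mutual information with the original — it can never recover what an earlier stage already threw away.)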

@SnoopJ @glyph @aburka @xgranade I *really like* the term semantic ablation, so I do not appreciate my brain barging in with 1) a George Carlin bit from "Parental Advisory: Explicit Lyrics" about how we bury meaning in euphemism over time, relevant because of 2) the post-it note my brain's waving at me that reads "semantic ablation is the mechanism, but just say Newspeak"

stupid brain

remembering things

@SnoopJ
Wow.

So, according to this link, AI is like a reverse compression algorithm that keeps redundancy and discards information.
@glyph @aburka @xgranade

@microblogc @glyph @aburka @xgranade I rather prefer how Ted Chiang put it 3 years ago now (!), but since this is attracting attention, just in case anyone present missed that one when it still smelled of fresh bits:

https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

ChatGPT Is a Blurry JPEG of the Web (Ted Chiang, The New Yorker)
@microblogc @glyph @aburka @xgranade anyway, the answer to your question is an *emphatic* yes: neural networks can be viewed quite literally as a form of compression, and it is not uncommon for them to be *part* of compression algorithms, though this is not how most 'familiar' compression works.
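
(The "model = compressor" point can be made concrete in a few lines. This is only an illustrative sketch with a deliberately dumb model — character frequencies — but the accounting is the same one a neural compressor uses: a model's log-loss on the data *is* a code length, per Shannon. Names here are made up for the example.)

```python
import math
from collections import Counter

def model_code_length_bits(text: str) -> float:
    """Shannon code length (in bits) for `text` under a model built
    from the text's own character frequencies. The better a model
    predicts the data, the fewer bits the data costs to encode."""
    counts = Counter(text)
    total = len(text)
    # Each character costing -log2(p) bits is the optimal code length
    # for a symbol the model assigns probability p.
    return sum(-c * math.log2(c / total) for c in counts.values())

sample = "the rain in spain stays mainly in the plain"
raw_bits = 8 * len(sample)                  # naive fixed-width encoding
model_bits = model_code_length_bits(sample)
print(f"{raw_bits} bits raw vs ~{model_bits:.0f} bits under the model")
```

A better predictive model (say, an LLM instead of a frequency table) shrinks the second number further, which is exactly the sense in which training one is building a compressor.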
@aburka I have no idea — that scares me too, frankly.