As mentioned before, I hate bringing this up because I have no evidence or expertise here, just a gut feeling. But I just can't help feeling like, setting everything else about LLM chatbots aside, they're quickly becoming the leaded gasoline of our time.

Something doing real damage to human cognition, but in this diffuse and difficult-to-measure kind of way.

Many, not nearly all but *many*, folks using these things seem (again, as a gut feeling) to just talk differently after contact with chatbots? I can't even quite put my finger on it, but it scares the shit out of me.

It's not even an argument against chatbots, I have plenty of arguments that are far better substantiated, it's a personal fear about what they're doing.

I've gotten a number of replies and seen a fair bit of discussion elsewhere to the effect that this is a consequence of having an automated yes-man at your beck and call.

I don't think that's wrong, but it's also not what I'm getting at. Yes-men will validate your bad ideas, eroding the criticality required to distinguish good ideas from bad ones. But what I've casually observed (again, as a non-expert) is people losing the ability to express ideas *at all*.

Someone yes-manned to hell might make a bad movie because no one is around to tell them that the idea for that movie sucks. We've definitely seen that in any number of walks of life, but I suspect (as a non-expert making observations entirely devoid of rigor) that we're seeing something different and significantly worse still.

I beg you, please take this whole thread and others that I post along the same lines with a massive grain of salt. I do *not* know what I'm talking about here. I come from a place of seeing a thing, not knowing what in the fuck it is, and seeing comparatively little in the ways of expert analysis that I could use to understand what I'm seeing.

Normally if I don't have a fucking clue, I try to shut the fuck up. But there's something *missing* here, and I'm trying to express why that scares me.

@xgranade I have been thinking of it like gambling or microtransactions, in the sense of how they're addictive. They provide a reward (a good answer) sometimes, but not every time. The mechanism of addiction seems similar.

But the reward is "not having to think" so I feel like it doesn't necessarily need someone to do a study.

@xgranade I see it as well
@xgranade I don't know if it's an effect of the chatbot per se, or a second-order effect of arguing with one's own conscience and inventing strawmen to pull down
@aburka @xgranade I can easily believe it's from prolonged daily exposure to smoothed-over text and learning to speak that way as a form of interface

@SnoopJ @aburka @xgranade

1. I have absolutely seen this kind of—I hate using this term but there's not really any other word for it—"cognitive decline" from many people, and I am collecting a file of documented public instances of it. It's definitely fucking scary. I will say that it is selective, and I don't know why *some* users seem to suffer from it and others don't. I certainly haven't seen a pattern. It seems to be a general phenomenon of which the infamous "AI psychosis" is a subcategory.

@SnoopJ @aburka @xgranade

2. never in my life have I used the phrase "tetraethyl lead" more frequently than in the last 6 months. not even close.

@SnoopJ @aburka @xgranade

3. It's also because of (1.) that my own usage has shrunk to nothing. I think that some of the people I am presently arguing with will end up being "safe" but I don't know which ones, or why. I haven't seen any plausible safety protocols; I do not know how to experiment with it safely. So every time I open up a chat prompt I feel like I'm asking myself "how big of a swig from this flask of luminous radium paint do I feel comfortable drinking in one sitting".

@glyph @aburka @xgranade I think "semantic ablation" is quite a good turn of phrase for it. And agreed.
@SnoopJ @aburka @xgranade that is _incredibly_ disturbing, and, also, accurate
@SnoopJ @aburka @xgranade Flowers for Altman amirite

@glyph @SnoopJ @aburka @xgranade

Does the SCP Foundation have any general material around environmental and occupational infohazards? Because model-based conversation entities would seem to fit.

(joking, but only barely so: SCP is satire as much as it is speculative existential horror)

Unless a more helpful approach would be anthropology or sociology: "tools shape users," or a power-and-consent analysis.

SCP-8196 - SCP Foundation

@glyph @aburka @xgranade coined last week in this piece, which might have missed you: https://www.theregister.com/2026/02/16/semantic_ablation_ai_writing/

(coined to describe the "AI" writing but here I'm using it to describe the bleed of that into the user's writing/speech)

Why AI writing is so generic, boring, and dangerous: Semantic ablation (The Register)
@SnoopJ @glyph @xgranade ah thanks, I'll add it to my 100 open tabs of "stuff to read about AI"
@SnoopJ @glyph @aburka Oh, fuck, that makes so much sense. Data processing inequalities rearing their extremely sharp teeth and all.
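
(For anyone who missed the reference: the "data processing inequality" is a standard information-theory result saying that post-processing can never add information. A rough sketch of the usual statement, stated loosely here rather than with full formality:

```latex
% Data-processing inequality: if X -> Y -> Z form a Markov chain
% (Z depends on X only through Y), then mutual information can only shrink:
I(X;Z) \le I(X;Y) \quad \text{whenever } X \to Y \to Z
% Applied loosely to this thread: a model trained on the web (Y), and its
% outputs (Z), cannot carry more information about the world (X) than the
% web itself did.
```

Hence the "sharp teeth": each extra stage of processing can only lose signal, never recover it.)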

@SnoopJ @glyph @aburka @xgranade I *really like* the term semantic ablation, so I do not appreciate my brain barging in with 1) a George Carlin bit from "Parental Advisory: Explicit Lyrics" about how we bury meaning in euphemism over time, relevant because of 2) the post-it note my brain's waving at me that reads "semantic ablation is the mechanism, but just say Newspeak"

stupid brain

remembering things

@SnoopJ @glyph @aburka @xgranade Wow.

So, according to this link, AI is like a reverse compression algorithm: one that keeps redundancy and discards information.

@microblogc @glyph @aburka @xgranade I rather prefer how Ted Chiang put it 3 years ago now (!), but since this is attracting attention, just in case anyone present missed that one when it still smelled of fresh bits:

https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

ChatGPT Is a Blurry JPEG of the Web (The New Yorker)
@microblogc @glyph @aburka @xgranade anyway, the answer to your question is an *emphatic* yes: neural networks can be viewed quite literally as a form of compression, and it is not uncommon for them to be *part* of compression algorithms, though this is not how most 'familiar' compression works.
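
To make that concrete with a toy sketch (my own illustration, not how any actual codec or LLM is built): replacing raw data with a small fitted model *is* lossy compression. Here two fitted parameters stand in for a hundred noisy samples, and the discarded residual is exactly the information that doesn't survive the round trip:

```python
# Toy illustration of "a model as lossy compression" (hypothetical example,
# not any real codec): store a 2-parameter line instead of 100 raw values.
# The residual noise is what gets thrown away.
import random


def fit_line(xs, ys):
    """Ordinary least squares for y ~ a*x + b, closed form."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b


def main():
    random.seed(0)
    xs = [i / 10 for i in range(100)]
    # Underlying signal 2x + 1, plus noise the "compressor" will discard.
    ys = [2.0 * x + 1.0 + random.gauss(0, 0.3) for x in xs]

    a, b = fit_line(xs, ys)          # "compressed" form: just 2 numbers
    recon = [a * x + b for x in xs]  # lossy reconstruction of 100 numbers

    mse = sum((y - r) ** 2 for y, r in zip(ys, recon)) / len(ys)
    print(f"stored 2 parameters instead of 100 values; MSE = {mse:.3f}")


if __name__ == "__main__":
    main()
```

The reconstruction is plausible but not faithful, which is the whole analogy: what comes back out is a smoothed-over version of what went in.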
@aburka I have no idea — that scares me too, frankly.

@xgranade One guess I have is that once you spend enough time with a perfect sycophant, it may become really hard to listen to a person disagree with you, not just in a "I don't want to hear this" way, but also just in terms of processing what they're saying. The same goes for any challenging material, not just disagreements, if the sycophant can translate it into something you want to hear instead. It *sounds like* a great summary, problem solved.

But I avoid AI boosters so maybe I'm wrong.

@skyfaller @xgranade Yeah, I've had similar thoughts. We can see the effect of being surrounded by sycophants on wealthy and powerful people. They start to think every idea that pops into their heads is genius. So it's a human tendency.

@xgranade the following is pure speculation based solely on personal experience

so speaking for ourselves, we have, like, hard-mode neurology, yeah? don't get us wrong, we love what we are and wouldn't change it, but we have a predisposition towards paranoia and as kids almost all of our conversations were with ourselves (since humans didn't acknowledge us as one of them), which caused us to get pretty far off in the weeds in terms of what we care about and how we talk about it?

@xgranade like we had this entire personalized jargon which felt normal to us because everyone we talked to (i.e., ourselves) understood it, you know?
@xgranade we were very fortunate in that our artistic expression was interacting with computers, which are very rigorous in their demands. use a traditional programming language to tell a computer to do something and you will get nowhere unless you've fully understood what you're asking it to do, so that was a lot of forced practice of our science skills, our ability to test things against measurable reality
@xgranade and then later in life, after transition gave us common ground with humans and interacting with them became an option, we had to learn a lot of specific skills to understand the consensus reality that people live in and kind of funnel everything through the stuff we have in common so that there's, like, a mutually intelligible purpose for it? because otherwise people are just confused?

@xgranade and because we had to learn all this stuff explicitly through trial and error, we're extremely aware of what the skills consist of

and, like... a generative language model is going to NOT require any of that. none of the science, none of the social skill. it will just mirror people's remarks back to them and it will never admit to not understanding and it doesn't behave any differently when people speak total nonsense to it

@xgranade so it feels perfectly clear to us that spending too much time talking to the things would result in atrophy of the trial-and-error parts of social interaction, because people doing that are not exercising that skill but the machine is faking the reward for it anyway

@xgranade again, this is total speculation. just because we can identify a plausible mechanism doesn't make this science; somebody would have to do actual research to validate our guess.

... but KNOWING that is kind of the precise thing at issue, yeah?

@ireneista YUP. But I guess why this is a fear for me is because by the time research does validate or disprove any of these guesses, this shit will have done nearly incalculable harm.
@xgranade right, absolutely. it's why we have personally been avoiding all interaction with the things. we need our brain. we're using it.
@xgranade of course that's an easy decision for us for a variety of reasons, not least that we don't want anything these tools can give us.
@ireneista Yeah, absolutely. It's why I'm careful to not make this shit one of my arguments against LLMs, there's far better and far more substantiated arguments — but it is a personal fear, and that's not nothing, even if fear isn't a good *argument*.
@xgranade yeah. fear is delivering an important message - about what matters to you, what you have to lose
@ireneista Honestly? Friends. I have multiple friends who have zero interest in any AI products, but who don't resist when they get added in forced updates. What will happen to them?
@xgranade that's our biggest concern, too. we've known people who use the things for therapy, and... well, that terrifies us for a long list of reasons.
@ireneista Yeah, absolutely. But even short of that, the people who would never click the "ask AI" button on purpose are having prompts shoved on them anyway. Fuck, *I* misclicked and hit an "ask AI" button by accident the other day!

@xgranade also, every general-knowledge web search we've done in the last few months has returned almost entirely machine-generated results

(which do not disclose their origin)

@ireneista I run my own selfhost instance of a metasearch engine, configured to try and avoid that very problem, with only limited success.

@xgranade @ireneista I’ve disliked LLMs almost from the start — fortunately, I inadvertently inoculated myself against the hype early on by triggering bullshit with mundane prompts — but I agree, there’s something from the last year or so, even more so the last 6 months, that’s been especially unnerving.

Like the people who literally cannot function in perfectly ordinary tasks — and who show no signs of this difficulty being a probable and understandable long-term condition/neurodivergence/etc. — without asking a chatbot. The learned helplessness I’m seeing — and I say this as someone who sometimes struggles with this issue myself — is *off the charts*, far beyond what I’ve seen in other technology scenarios.

Or programmers and developers who have gone full speed ahead into “agentic” AI, swearing up and down it’s making them insanely productive — but they often either can’t or won’t tell just what it is they’re producing, except for an ever-increasing number of “agents”. The ones who are clearly producing something other than “more agents” appear to mostly be producing tools to create or organize or orchestrate agents. And the agents are doing… what? Mostly trivial things that could be done with existing automation tech, or cranking out more software to wrangle more agents. The amount and quality of new software in general does not correspond at all to the alleged productivity claims.

Those are just two rather prominent examples. I actively *do not* want to deskill myself to this level or even have a higher risk of it happening.

@dpnash @xgranade yeah, we've seen that too. it's quite worrying to look at.

@xgranade it's a stark contrast though to the way we've learned new tools throughout our life, which has always started with playing around with them. in this case we're avoiding the play.

we're confident that's the right move (we wouldn't play with a live ebola virus either), but it is definitely a decision that we felt the need to think through carefully.

@ireneista Yeah, no, I can't think of any other technology where hardcore abstinence has been both my gut and reasoned response. Even with cryptocurrency, I briefly got into it before reasoning my way to "oh wait, this sucks actually" (and even now, with the caveat that for some people oppressed out of the modern financial system, it's the only option, no matter how much it sucks).

But LLMs are a hard fucking pass.

@ireneista (Full disclosure: I have used ChatGPT a few times for the explicit and narrowly defined purpose of better understanding the thing I'm critiquing. But that is very different from experimenting for the purpose of learning to *use* the tool.)
@xgranade @ireneista I've done the same. The failure rate generative "AI" has, in use cases that matter to me, is high enough I have been genuinely surprised at the number of people who find it useful in more than one or two very specific niche cases.

@xgranade like, we did play around with GPT-3 briefly when that was the latest thing, and that did tell us what we feel we need to know about how it works.

we do read occasional research papers on new developments with these things, which is why we feel comfortable saying there haven't been any recent innovations which would merit revisiting it.

@xgranade @ireneista

This is essentially what my argument has been, for decades, about the effect that rewiring brains for operating personal automobiles has had on society. Entire populations trained in quickly evaluating information for rapid dismissal, because dwelling on any one thing for even microseconds too long, at those speeds, can get you and others killed.

Which habit of processing cannot help but be transferred to other domains, where there is no life-or-death cost of not dismissing information rapidly, but neither is there any nearly as determinative countervailing consequence of not slowing down those split second dismissals.

With regard to interfacing with the extrusion-ends of LLMs, this represents the culmination of a process of indelibility that Socrates was already complaining about, atrophying capacities that are not exercised by reading static text.

To wit, "consensus reality that people live in" was already a result of a media machine of canonical texts (media as in mediums, not institutions), this desiring machine not faking, as such, but nonetheless undergirding, thus rewarding, social interaction of shibboleths.

All LLMs have done is reify this absence of trial-and-error dialectic. The consensus zeitgeist (fourth estate), existing only to replicate itself through the bodies of humans, having escaped even the containment of citation.

@beadsland @xgranade that's an interesting line of reasoning. it has surface plausibility, though that focus on instant decisions also does kind of seem like a thing that would be self-reinforcing once it exists, even if the original pressure were removed.

@ireneista @xgranade

Habits, as a rule, once established, and insofar as they align with one's sense of self (here, being a socially independent person, liberated by being able to drive competently), are self-reinforcing. This is, at a fundamental level, what habits are.

Instant dismissal of information would not be exempt from this rule of habituation, even without considering the compounding recursion that self-assessment of decision-making, itself, implicates non-rapid dismissal of information about one's own decision making.

So yeah, removing the original pressure resolves nothing absent conscious effort to change the habit. At least as intentional as the conscious effort that went into developing the habit in the first place.

As someone who never learned to drive, never wanted to learn to drive, who bailed on pressure to learn to drive after one lesson wherein myself was told we had been almost side-swiped by a truck myself was oblivious to even being in the parking lot with us, my experience of interacting with people who drive is not dissimilar to OP's experience of people who use LLMs.

They talk differently. They think differently. Heck they even relate to physical space and geography and the passage of time differently. All in a manner that speaks to a consensus reality myself am not, and really would prefer never to be, party to.

So too my experience of folk raised within canonicity, which due to my somewhat unconventional movement through K-12 education, largely missed me.

@beadsland that’s a really interesting theory that I (non-driver for over 20yrs, complex reasons) have never thought about. Don’t want to hijack a very interesting AI convo, but I will mull over it. Slowly. Thanks.
@ireneista Yeah, no, that makes a lot of sense. I tend to think of it as what happens when someone overfits to noise, but I readily admit to having precisely no expertise here.
@xgranade that seems like a reasonable way to model it as well

@xgranade this is my cynicism showing but since those chatbots have control planes run by people who are eager to get people addicted to chatbots, I have to wonder if the LLM companies are deliberately shaping the initial engagement with one to make it as addictive as possible.

In the same way that Vegas loves to comp a "high roller."

@xgranade Chatbots have been trained to speak like subservient, submissive, sycophantic handmaids that can be commanded even by those who would normally not be in such a position of power, and these digital handmaids will accept and execute any order.

Talking to chatbots, and being able for the first time to boss around somebody who will actually execute the given orders instead of just laughing at them, is surely doing something to the psyche of those getting this glimpse of newfound, previously never experienced power.

So instead of having just a few spoiled lords like in a medieval aristocracy, now _everybody_ is turning into such spoiled brats.

As the saying goes: Power corrupts, absolute power corrupts absolutely.