As mentioned before, I hate bringing this up because I have no evidence or expertise here, just a gut feeling. But I just can't help feeling like, setting aside everything else about LLM chatbots, they're quickly becoming the leaded gasoline of our time.

Something doing real damage to human cognition, but in this diffuse, difficult-to-measure kind of way.

Many, not nearly all but *many*, folks using these things seem (again, as a gut feeling) to just talk differently after contact with chatbots? I can't even quite put my finger on it, but it scares the shit out of me.

It's not even an argument against chatbots; I have plenty of arguments that are far better substantiated. It's a personal fear about what they're doing.

@xgranade the following is pure speculation based solely on personal experience

so speaking for ourselves, we have, like, hard-mode neurology, yeah? don't get us wrong, we love what we are and wouldn't change it, but we have a predisposition towards paranoia and as kids almost all of our conversations were with ourselves (since humans didn't acknowledge us as one of them), which caused us to get pretty far off in the weeds in terms of what we care about and how we talk about it?

@xgranade like we had this entire personalized jargon which felt normal to us because everyone we talked to (i.e., ourselves) understood it, you know?
@xgranade we were very fortunate in that our artistic expression was interacting with computers, which are very rigorous in their demands. use a traditional programming language to tell a computer to do something and you will get nowhere unless you've fully understood what you're asking it to do, so that was a lot of forced practice of our science skills, our ability to test things against measurable reality
@xgranade and then later in life, after transition gave us common ground with humans and interacting with them became an option, we had to learn a lot of specific skills to understand the consensus reality that people live in and kind of funnel everything through the stuff we have in common so that there's, like, a mutually intelligible purpose for it? because otherwise people are just confused?

@xgranade and because we had to learn all this stuff explicitly through trial and error, we're extremely aware of what the skills consist of

and, like... a generative language model is going to NOT require any of that. none of the science, none of the social skill. it will just mirror people's remarks back to them and it will never admit to not understanding and it doesn't behave any differently when people speak total nonsense to it

@xgranade so it feels perfectly clear to us that spending too much time talking to the things would result in atrophy of the trial-and-error parts of social interaction, because people doing that are not exercising that skill but the machine is faking the reward for it anyway

@xgranade again, this is total speculation. just because we can identify a plausible mechanism doesn't make this science; somebody would have to do actual research to validate our guess.

... but KNOWING that is kind of the precise thing at issue, yeah?

@ireneista YUP. But I guess the reason this is a fear for me is that by the time research validates or disproves any of these guesses, this shit will have done nearly incalculable harm.
@xgranade right, absolutely. it's why we have personally been avoiding all interaction with the things. we need our brain. we're using it.
@xgranade of course that's an easy decision for us for a variety of reasons, not least that we don't want anything these tools can give us.
@ireneista Yeah, absolutely. It's why I'm careful not to make this shit one of my arguments against LLMs; there are far better and far more substantiated ones — but it is a personal fear, and that's not nothing, even if fear isn't a good *argument*.
@xgranade yeah. fear is delivering an important message - about what matters to you, what you have to lose
@ireneista Honestly? Friends. I have multiple friends who have zero interest in any AI products, but who don't resist when they get added in forced updates. What will happen to them?
@xgranade that's our biggest concern, too. we've known people who use the things for therapy, and... well, that terrifies us for a long list of reasons.
@ireneista Yeah, absolutely. But even short of that, the people who would never click the "ask AI" button on purpose are having prompts shoved on them anyway. Fuck, *I* misclicked and hit an "ask AI" button by accident the other day!

@xgranade also, every general-knowledge web search we've done in the last few months has returned almost entirely machine-generated results

(which do not disclose their origin)

@ireneista I run my own self-hosted instance of a metasearch engine, configured to try to avoid that very problem, with only limited success.
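(A minimal sketch of the kind of filtering involved, assuming a SearXNG-style setup; the engine isn't named here. SearXNG's hostnames plugin can drop results whose domains match a regex. The domain patterns below are hypothetical placeholders, not an actual blocklist.)

```yaml
# Hypothetical excerpt from a SearXNG settings.yml.
# The hostnames plugin removes results whose host matches any regex below.
# These domain patterns are made-up examples, not a real blocklist.
hostnames:
  remove:
    - '(.*\.)?ai-slop-farm\.example$'
    - '(.*\.)?generated-answers\.example$'
```

(Domain-level blocking only goes so far, of course, when generated content doesn't disclose its origin.)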

@xgranade @ireneista I’ve disliked LLMs almost from the start — fortunately, I inadvertently inoculated myself against the hype early on by triggering bullshit with mundane prompts — but I agree, there’s something about the last year or so, even more so the last 6 months, that’s been especially unnerving.

Like the people who literally cannot get through perfectly ordinary tasks — and who show no signs of this difficulty stemming from a plausible, understandable long-term condition/neurodivergence/etc. — without asking a chatbot. The learned helplessness I’m seeing — and I say this as someone who sometimes struggles with this issue myself — is *off the charts*, far beyond what I’ve seen with any other technology.

Or programmers and developers who have gone full speed ahead into “agentic” AI, swearing up and down it’s making them insanely productive — but they often either can’t or won’t tell just what it is they’re producing, except for an ever-increasing number of “agents”. The ones who are clearly producing something other than “more agents” appear to mostly be producing tools to create or organize or orchestrate agents. And the agents are doing… what? Mostly trivial things that could be done with existing automation tech, or cranking out more software to wrangle more agents. The amount and quality of new software in general does not correspond at all to the claimed productivity gains.

Those are just two rather prominent examples. I actively *do not* want to deskill myself to this level or even have a higher risk of it happening.

@dpnash @xgranade yeah, we've seen that too. it's quite worrying to look at.

@xgranade it's a stark contrast though to the way we've learned new tools throughout our life, which has always started with playing around with them. in this case we're avoiding the play.

we're confident that's the right move (we wouldn't play with a live ebola virus either), but it is definitely a decision that we felt the need to think through carefully.

@ireneista Yeah, no, I can't think of any other technology where hardcore abstinence has been both my gut and reasoned response. Even with cryptocurrency, I briefly got into it before reasoning my way to "oh wait, this sucks actually" (and even now, with the caveat that for some people oppressed out of the modern financial system, it's the only option no matter how much it sucks).

But LLMs are a hard fucking pass.

@ireneista (Full disclosure: I have used ChatGPT a few times for the explicit and narrowly defined purpose of better understanding the thing I'm critiquing. But that is very different from experimenting for the purpose of learning to *use* the tool.)
@xgranade @ireneista I've done the same. The failure rate generative "AI" has, in use cases that matter to me, is high enough that I have been genuinely surprised at the number of people who find it useful in more than one or two very specific niche cases.

@xgranade like, we did play around with GPT-3 briefly when that was the latest thing, and that did tell us what we feel we need to know about how it works.

we do read occasional research papers on new developments with these things, which is why we feel comfortable saying there haven't been any recent innovations which would merit revisiting it.

@xgranade @ireneista

This is essentially what my argument has been, for decades, about the effect rewiring brains for operating personal automobiles has had on society. Entire populations trained in quickly evaluating information for rapid dismissal, because dwelling on any one thing for even microseconds too long, at those speeds, can get you and others killed.

Which habit of processing cannot help but be transferred to other domains, where there is no life-or-death cost of not dismissing information rapidly, but neither is there any nearly as determinative countervailing consequence of not slowing down those split-second dismissals.

With regard to interfacing with the extrusion-ends of LLMs, this represents the culmination of a process of indelibility that Socrates was already complaining about, atrophying capacities that are not exercised by reading static text.

To wit, "consensus reality that people live in" was already a result of a media machine of canonical texts (media as in mediums, not institutions), this desiring machine not faking, as such, but nonetheless undergirding, thus rewarding, social interaction of shibboleths.

All LLMs have done is reify this absence of trial-and-error dialectic. The consensus zeitgeist (fourth estate), existing only to replicate itself through the bodies of humans, having escaped even the containment of citation.

@beadsland @xgranade that's an interesting line of reasoning. it has surface plausibility, though that focus on instant decisions also does kind of seem like a thing that would be self-reinforcing once it exists, even if the original pressure were removed.

@ireneista @xgranade

Habits, as a rule, once established, and insofar as they align with one's sense of self (here, being a socially independent person, liberated by being able to drive competently), are self-reinforcing. This is, at a fundamental level, what habits are.

Instant dismissal of information would not be exempt from this rule of habituation, even without considering the compounding recursion that self-assessment of decision-making, itself, implicates non-rapid dismissal of information about one's own decision-making.

So yeah, removing the original pressure resolves nothing absent conscious effort to change the habit. At least as intentional as the conscious effort that went into developing the habit in the first place.

As someone who never learned to drive, never wanted to learn to drive, who bailed on pressure to learn to drive after one lesson wherein myself was told we had been almost side-swiped by a truck myself was oblivious to even being in the parking lot with us, my experience of interacting with people who drive is not dissimilar to OP's experience of people who use LLMs.

They talk differently. They think differently. Heck they even relate to physical space and geography and the passage of time differently. All in a manner that speaks to a consensus reality myself am not, and really would prefer never to be, party to.

So too my experience of folk raised within canonicity, which due to my somewhat unconventional movement through K-12 education, largely missed me.

@beadsland that’s a really interesting theory that I (non-driver for over 20yrs, complex reasons) have never thought about. Don’t want to hijack a very interesting AI convo, but I will mull over it. Slowly. Thanks.
@ireneista Yeah, no, that makes a lot of sense. I tend to think of it as what happens when someone overfits to noise, but I readily admit to having precisely no expertise here.
@xgranade that seems like a reasonable way to model it as well