As mentioned before, I hate bringing this up because I have no evidence or expertise here, just a gut feeling. But I just can't help feeling like, aside from everything else about LLM chatbots, they're quickly becoming the leaded gasoline of our time.

Something doing real damage to human cognition, but in this diffuse and difficult to measure kind of way.

Many, not nearly all but *many*, folks using these things seem (again, as a gut feeling) to just talk differently after contact with chatbots? I can't even quite put my finger on it, but it scares the shit out of me.

It's not even an argument against chatbots; I have plenty of arguments that are far better substantiated. It's a personal fear about what they're doing.

@xgranade the following is pure speculation based solely on personal experience

so speaking for ourselves, we have, like, hard-mode neurology, yeah? don't get us wrong, we love what we are and wouldn't change it, but we have a predisposition towards paranoia and as kids almost all of our conversations were with ourselves (since humans didn't acknowledge us as one of them), which caused us to get pretty far off in the weeds in terms of what we care about and how we talk about it?

@xgranade like we had this entire personalized jargon which felt normal to us because everyone we talked to (i.e. ourselves) understood it, you know?
@xgranade we were very fortunate in that our artistic expression was interacting with computers, which are very rigorous in their demands. use a traditional programming language to tell a computer to do something and you will get nowhere unless you've fully understood what you're asking it to do, so that was a lot of forced practice of our science skills, our ability to test things against measurable reality
@xgranade and then later in life, after transition gave us common ground with humans and interacting with them became an option, we had to learn a lot of specific skills to understand the consensus reality that people live in and kind of funnel everything through the stuff we have in common so that there's, like, a mutually intelligible purpose for it? because otherwise people are just confused?

@xgranade and because we had to learn all this stuff explicitly through trial and error, we're extremely aware of what the skills consist of

and, like... a generative language model is NOT going to require any of that. none of the science, none of the social skill. it will just mirror people's remarks back to them, and it will never admit to not understanding, and it doesn't behave any differently when people speak total nonsense to it

@xgranade so it feels perfectly clear to us that spending too much time talking to the things would result in atrophy of the trial-and-error parts of social interaction, because people doing that are not exercising that skill but the machine is faking the reward for it anyway

@xgranade again, this is total speculation. just because we can identify a plausible mechanism doesn't make this science; somebody would have to do actual research to validate our guess.

... but KNOWING that is kind of the precise thing at issue, yeah?

@ireneista YUP. But I guess why this is a fear for me is because by the time research does validate or disprove any of these guesses, this shit will have done nearly incalculable harm.
@xgranade right, absolutely. it's why we have personally been avoiding all interaction with the things. we need our brain. we're using it.
@xgranade of course that's an easy decision for us for a variety of reasons, not least that we don't want anything these tools can give us.
@ireneista Yeah, absolutely. It's why I'm careful to not make this shit one of my arguments against LLMs, there's far better and far more substantiated arguments — but it is a personal fear, and that's not nothing, even if fear isn't a good *argument*.

@xgranade @ireneista I’ve disliked LLMs almost from the start — fortunately, I inadvertently inoculated myself against the hype at the very start by triggering bullshit with mundane prompts — but I agree, there’s something from the last year or so, even more so the last 6 months, that’s been especially unnerving.

Like the people who literally cannot function in perfectly ordinary tasks — and who show no signs of this difficulty being a probable and understandable long-term condition/neurodivergence/etc. — without asking a chatbot. The learned helplessness I’m seeing — and I say this as someone who sometimes struggles with this issue myself — is *off the charts*, far beyond what I’ve seen in other technology scenarios.

Or programmers and developers who have gone full speed ahead into “agentic” AI, swearing up and down it’s making them insanely productive — but they often either can’t or won’t tell just what it is they’re producing, except for an ever-increasing number of “agents”. The ones who are clearly producing something other than “more agents” appear to mostly be producing tools to create or organize or orchestrate agents. And the agents are doing… what? Mostly trivial things that could be done with existing automation tech, or cranking out more software to wrangle more agents. The amount and quality of new software in general does not correspond at all to the alleged productivity claims.

Those are just two rather prominent examples. I actively *do not* want to deskill myself to this level or even have a higher risk of it happening.

@dpnash @xgranade yeah, we've seen that too. it's quite worrying to look at.