As mentioned before, I hate bringing this up because I have no evidence or expertise here, just a gut feeling. But I just can't help feeling like, aside from everything else about LLM chatbots, they're quickly becoming the leaded gasoline of our time.

Something doing real damage to human cognition, but in this diffuse and difficult to measure kind of way.

Many, not nearly all but *many*, folks using these things seem (again, as a gut feeling) to just talk differently after contact with chatbots? I can't even quite put my finger on it, but it scares the shit out of me.

It's not even an argument against chatbots; I have plenty of arguments that are far better substantiated. It's a personal fear about what they're doing.

@xgranade the following is pure speculation based solely on personal experience

so speaking for ourselves, we have, like, hard-mode neurology, yeah? don't get us wrong, we love what we are and wouldn't change it, but we have a predisposition towards paranoia and as kids almost all of our conversations were with ourselves (since humans didn't acknowledge us as one of them), which caused us to get pretty far off in the weeds in terms of what we care about and how we talk about it?

@xgranade like we had this entire personalized jargon which felt normal to us because everyone we talked to (i.e. ourselves) understood it, you know?
@xgranade we were very fortunate in that our artistic expression was interacting with computers, which are very rigorous in their demands. use a traditional programming language to tell a computer to do something and you will get nowhere unless you've fully understood what you're asking it to do, so that was a lot of forced practice of our science skills, our ability to test things against measurable reality
@xgranade and then later in life, after transition gave us common ground with humans and interacting with them became an option, we had to learn a lot of specific skills to understand the consensus reality that people live in and kind of funnel everything through the stuff we have in common so that there's, like, a mutually intelligible purpose for it? because otherwise people are just confused?

@xgranade and because we had to learn all this stuff explicitly through trial and error, we're extremely aware of what the skills consist of

and, like... a generative language model is NOT going to require any of that. none of the science, none of the social skill. it will just mirror people's remarks back to them and it will never admit to not understanding and it doesn't behave any differently when people speak total nonsense to it

@xgranade so it feels perfectly clear to us that spending too much time talking to the things would result in atrophy of the trial-and-error parts of social interaction, because people doing that are not exercising that skill but the machine is faking the reward for it anyway

@xgranade again, this is total speculation. just because we can identify a plausible mechanism doesn't make this science; somebody would have to do actual research to validate our guess.

... but KNOWING that is kind of the precise thing at issue, yeah?

@ireneista YUP. But I guess why this is a fear for me is because by the time research does validate or disprove any of these guesses, this shit will have done nearly incalculable harm.
@xgranade right, absolutely. it's why we have personally been avoiding all interaction with the things. we need our brain. we're using it.
@xgranade of course that's an easy decision for us for a variety of reasons, not least that we don't want anything these tools can give us.

@xgranade it's a stark contrast though to the way we've learned new tools throughout our life, which has always started with playing around with them. in this case we're avoiding the play.

we're confident that's the right move (we wouldn't play with a live ebola virus either), but it is definitely a decision that we felt the need to think through carefully.

@ireneista Yeah, no, I can't think of any other technology where hardcore abstinence has been both my gut and reasoned response. Even cryptocurrency I briefly got into before reasoning my way to "oh wait, this sucks actually" (and even now, with the caveat that for some people oppressed out of the modern financial system, it's the only option no matter how much it sucks).

But LLMs are a hard fucking pass.

@ireneista (Full disclosure: I have used ChatGPT a few times for the explicit and narrowly defined purpose of better understanding the thing I'm critiquing. But that is very different from experimenting for the purpose of learning to *use* the tool.)
@xgranade @ireneista I've done the same. The failure rate generative "AI" has, in use cases that matter to me, is high enough that I have been genuinely surprised at the number of people who find it useful in more than one or two very specific niche cases.

@xgranade like, we did play around with GPT-3 briefly when that was the latest thing, and that did tell us what we feel we need to know about how it works.

we do read occasional research papers on new developments with these things, which is why we feel comfortable saying there haven't been any recent innovations which would merit revisiting it.