As mentioned before, I hate bringing this up because I have no evidence or expertise here, just a gut feeling. But I just can't help feeling like, aside from everything else about LLM chatbots, they're quickly becoming the leaded gasoline of our time.

Something doing real damage to human cognition, but in this diffuse and difficult-to-measure kind of way.

Many, not nearly all but *many*, folks using these things seem (again, as a gut feeling) to just talk differently after contact with chatbots? I can't even quite put my finger on it, but it scares the shit out of me.

It's not even an argument against chatbots; I have plenty of arguments that are far better substantiated. It's a personal fear about what they're doing.

@xgranade the following is pure speculation based solely on personal experience

so speaking for ourselves, we have, like, hard-mode neurology, yeah? don't get us wrong, we love what we are and wouldn't change it, but we have a predisposition towards paranoia and as kids almost all of our conversations were with ourselves (since humans didn't acknowledge us as one of them), which caused us to get pretty far off in the weeds in terms of what we care about and how we talk about it?

@xgranade like we had this entire personalized jargon which felt normal to us because everyone we talked to (i.e. ourselves) understood it, you know?
@xgranade we were very fortunate in that our artistic expression was interacting with computers, which are very rigorous in their demands. use a traditional programming language to tell a computer to do something and you will get nowhere unless you've fully understood what you're asking it to do, so that was a lot of forced practice of our science skills, our ability to test things against measurable reality
@xgranade and then later in life, after transition gave us common ground with humans and interacting with them became an option, we had to learn a lot of specific skills to understand the consensus reality that people live in and kind of funnel everything through the stuff we have in common so that there's, like, a mutually intelligible purpose for it? because otherwise people are just confused?

@xgranade and because we had to learn all this stuff explicitly through trial and error, we're extremely aware of what the skills consist of

and, like... a generative language model is NOT going to require any of that. none of the science, none of the social skill. it will just mirror people's remarks back to them, and it will never admit to not understanding, and it doesn't behave any differently when people speak total nonsense to it

@xgranade so it feels perfectly clear to us that spending too much time talking to the things would result in atrophy of the trial-and-error parts of social interaction, because people doing that are not exercising that skill but the machine is faking the reward for it anyway

@xgranade again, this is total speculation. just because we can identify a plausible mechanism doesn't make this science; somebody would have to do actual research to validate our guess.

... but KNOWING that is kind of the precise thing at issue, yeah?

@ireneista YUP. But I guess why this is a fear for me is because by the time research does validate or disprove any of these guesses, this shit will have done nearly incalculable harm.

@xgranade @ireneista

This is essentially what my argument has been, for decades, about the effect rewiring brains for operating personal automobiles has had on society. Entire populations trained in quickly evaluating information for rapid dismissal, because dwelling on any one thing for even microseconds too long, at those speeds, can get you and others killed.

Which habit of processing cannot help but be transferred to other domains, where there is no life-or-death cost of not dismissing information rapidly, but neither is there any nearly as determinative countervailing consequence of not slowing down those split-second dismissals.

With regard to interfacing with the extrusion-ends of LLMs, this represents the culmination of a process of indelibility that Socrates was already complaining about, atrophying capacities that are not exercised by reading static text.

To wit, "consensus reality that people live in" was already a result of a media machine of canonical texts (media as in mediums, not institutions), this desiring machine not faking, as such, but nonetheless undergirding, thus rewarding, social interaction of shibboleths.

All LLMs have done is reify this absence of trial-and-error dialectic. The consensus zeitgeist (fourth estate), existing only to replicate itself through the bodies of humans, having escaped even the containment of citation.

@beadsland that’s a really interesting theory that I (non-driver for over 20yrs, complex reasons) have never thought about. Don’t want to hijack a very interesting AI convo, but I will mull over it. Slowly. Thanks.