As mentioned before, I hate bringing this up because I have no evidence or expertise here, just a gut feeling. But I just can't help feeling like, aside from everything else about LLM chatbots, they're quickly becoming the leaded gasoline of our time.

Something doing real damage to human cognition, but in this diffuse and difficult-to-measure kind of way.

Many, not nearly all but *many*, folks using these things seem (again, as a gut feeling) to just talk differently after contact with chatbots? I can't even quite put my finger on it, but it scares the shit out of me.

It's not even an argument against chatbots; I have plenty of arguments that are far better substantiated. It's a personal fear about what they're doing.

@xgranade the following is pure speculation based solely on personal experience

so speaking for ourselves, we have, like, hard-mode neurology, yeah? don't get us wrong, we love what we are and wouldn't change it, but we have a predisposition towards paranoia and as kids almost all of our conversations were with ourselves (since humans didn't acknowledge us as one of them), which caused us to get pretty far off in the weeds in terms of what we care about and how we talk about it?

@xgranade like we had this entire personalized jargon which felt normal to us because everyone we talked to (i.e. ourselves) understood it, you know?
@xgranade we were very fortunate in that our artistic expression was interacting with computers, which are very rigorous in their demands. use a traditional programming language to tell a computer to do something and you will get nowhere unless you've fully understood what you're asking it to do, so that was a lot of forced practice of our science skills, our ability to test things against measurable reality
@xgranade and then later in life, after transition gave us common ground with humans and interacting with them became an option, we had to learn a lot of specific skills to understand the consensus reality that people live in and kind of funnel everything through the stuff we have in common so that there's, like, a mutually intelligible purpose for it? because otherwise people are just confused?

@xgranade and because we had to learn all this stuff explicitly through trial and error, we're extremely aware of what the skills consist of

and, like... a generative language model is going to NOT require any of that. none of the science, none of the social skill. it will just mirror people's remarks back to them and it will never admit to not understanding and it doesn't behave any differently when people speak total nonsense to it

@xgranade so it feels perfectly clear to us that spending too much time talking to the things would result in atrophy of the trial-and-error parts of social interaction, because people doing that are not exercising that skill but the machine is faking the reward for it anyway

@xgranade again, this is total speculation. just because we can identify a plausible mechanism doesn't make this science; somebody would have to do actual research to validate our guess.

... but KNOWING that is kind of the precise thing at issue, yeah?

@ireneista YUP. But I guess why this is a fear for me is because by the time research does validate or disprove any of these guesses, this shit will have done nearly incalculable harm.

@xgranade @ireneista

This is essentially what my argument has been, for decades, about the effect rewiring brains for operating personal automobiles has had on society. Entire populations trained in quickly evaluating information for rapid dismissal, because dwelling on any one thing for even microseconds too long, at those speeds, can get you and others killed.

Which habit of processing cannot help but be transferred to other domains, where there is no life-or-death cost of not dismissing information rapidly, but neither is there any nearly as determinative countervailing consequence of not slowing down those split-second dismissals.

With regard to interfacing with the extrusion-ends of LLMs, this represents the culmination of a process of indelibility that Socrates was already complaining about, atrophying capacities that are not exercised by reading static text.

To wit, "consensus reality that people live in" was already a result of a media machine of canonical texts (media as in mediums, not institutions), this desiring machine not faking, as such, but nonetheless undergirding, thus rewarding, social interaction of shibboleths.

All LLMs have done is reify this absence of trial-and-error dialectic. The consensus zeitgeist (fourth estate), existing only to replicate itself through the bodies of humans, having escaped even the containment of citation.

@beadsland @xgranade that's an interesting line of reasoning. it has surface plausibility, though that focus on instant decisions also does kind of seem like a thing that would be self-reinforcing once it exists, even if the original pressure were removed.

@ireneista @xgranade

Habits, as a rule, once established, and insofar as they align with one's sense of self (here, being a socially independent person, liberated by being able to drive competently), are self-reinforcing. This is, at a fundamental level, what habits are.

Instant dismissal of information would not be exempt from this rule of habituation, even without considering the compounding recursion that self-assessment of decision-making, itself, implicates non-rapid dismissal of information about one's own decision-making.

So yeah, removing the original pressure resolves nothing absent conscious effort to change the habit. At least as intentional as the conscious effort that went into developing the habit in the first place.

As someone who never learned to drive, never wanted to learn to drive, who bailed on pressure to learn to drive after one lesson wherein myself was told we had almost been side-swiped by a truck that myself was oblivious to even being in the parking lot with us, my experience of interacting with people who drive is not dissimilar to OP's experience of people who use LLMs.

They talk differently. They think differently. Heck they even relate to physical space and geography and the passage of time differently. All in a manner that speaks to a consensus reality myself am not, and really would prefer never to be, party to.

So too my experience of folk raised within canonicity, which, due to my somewhat unconventional movement through K-12 education, largely missed me.

@beadsland that’s a really interesting theory that I (non-driver for over 20yrs, complex reasons) have never thought about. Don’t want to hijack a very interesting AI convo, but I will mull over it. Slowly. Thanks.