RE: https://mastodon.social/@glyph/116220491984902015

I’ve never used any of the LLM chatbots, avoided them even in the early days, because they had all the hallmarks of a major cognitive and psychological hazard from the very beginning.

“I’m not gonna use this shit until I can be sure it won’t mess me or others up” is an entirely reasonable position to take, but for some reason it makes even those who are otherwise lukewarm on the tech quite angry.

Just to make it clear because I've got a few comments that didn't get it: I'm literally only talking about my own choices here. You do what you want. I'm not your mom.

@baldur To be completely honest... even if it wasn't for the environmental impact, or the stolen data, I still don't really know what I'd use it for.

I honestly don't see how a machine that sometimes lies is useful for me. I don't want to have to check its output. And I especially don't want to interact with the computer as if it's a person. None of this appeals to me in the slightest.

@Tijn Yeah, the sales pitch is unconvincing to say the least.
Antoine Leblanc :transHaskell: (@[email protected])

this is the main reason why i believe that chatbot addiction / chatbot psychosis is a LOT more widespread than we realise: people with a clear understanding of the ethical issues try claude once, it does a thing correctly enough, they get one-shot, and they start posting as if sephiroth was on linkedin, ethical concerns be damned. it keeps happening. it's a big part of why i refuse to touch LLMs: i am not special, i am not magically immune to the brain worms that have claimed so many smart people. if it happened to them it could happen to me.

LGBTQIA+ and Tech
@baldur
and when you see the latest declaration of sama on "a future of intelligence as a utility": https://pouet.chapril.org/@bituur_esztreym/116221189966547581
@baldur Yeah, I've seen changes in others who use them, and I don't want to have that happen to me, I like being my weird self.

@baldur @glyph

Before I came to my current understanding, I toyed with Github Copilot, and went "neat, I guess I could use this to explore an API", but never really found a case where I wanted to.

Then I began to notice how it impacted mentoring sessions. The way the mentee's brain shifted into a different gear upon seeing the automatic suggestions was really distracting, and I had to start asking people to turn it off. It turned out that _I_ could read the suggestions and assess their usefulness pretty quickly, but everyone I was mentoring got stuck—two decades more experience will do that. So I came away feeling that the harm was greater than the benefit, and then began reading more about it.

All the while everyone around me was touting their excitement. The cognitive dissonance felt like I was being gaslit by everyone.

@baldur I've never used any, either. On the odd occasion when something popped up for chatting to 'someone', it quickly became obvious that it wasn't a human, because it never seemed to understand what I was saying. Nowadays I avoid even the chance of that happening by using email, phone or another method. You can easily tell if an email reply is automated or a template, as it rarely addresses any of the details you've already given.
@baldur an entirely reasonable position, but then I am not even lukewarm on the tech.

@baldur people complaining about the option of using a chatbot, in a sidebar of Firefox, caused me to occasionally experiment with chatbots in the sidebar.

When people make unreasonable demands, the listener will do the opposite of what's demanded.

Human nature.

@baldur If you could walk next door and visit another country, why would you try to build an opinion about that country based on the reports of others?

Trying a chatbot is easy and harmless and they are amazing at what they do, and if you spend a lot of time doing research on the internet for anything, it is a far superior tool. It works as an IT help desk, as a tax advisor, even as an attorney. At a minimum it organizes your thoughts and does an incredible job of mapping the info landscape.

Lawyer Cited Fake Cases Generated by ChatGPT in Legal Filings - NYC Today

In June 2023, a federal judge in Manhattan faced an unusual problem when two lawyers submitted a legal brief citing six court cases that did not actually exist. The cases were completely fabricated by ChatGPT, complete with realistic citations, plausible holdings, and convincing legal reasoning. When confronted, one of the lawyers testified that he had asked ChatGPT whether the cases were real, and the AI had said yes. This was not an isolated incident, as courts across multiple jurisdictions have flagged similar issues with AI-generated case citations in legal filings.


@Landa That's really a silly example which doesn't support any rational assessment of the usefulness or quality of information available from AI tools. Even the title of your citation (from 2023!) gives away your flawed conclusion. The title says "Lawyer" cited fake cases.

Any YouTube "AI for dummies" video will tell you how to ask for citations in your chats. If you don't know how to think critically, your thinking will not be improved by AI. That's not the test.

@IronManIV @baldur Harmless? The documentation about their harm is growing into a small mountain. They cause environmental, cognitive, economic, and social harms. They're damaging the foundations of modern education. Certainly not harmless.

@mason @baldur I said TRYING a chatbot is harmless for the individual, and I stand by that. I also think using AI on a regular basis is not harmful to adults.

I think a case can be made that predicts broad-based societal harms, or harms to children, or developmental or educational harms, but all of these are distinct domains of use and potential benefits or harms.

I am 100% sure that people who worry about their own brains being harmed could test this for themselves with zero risk of self-harm.

@baldur

"Lukewarm on the tech" isn't quite my position, but close. On the whole, I'm on the "against" side, but mostly for environmental, ethical, and employment reasons. The possible cognitohazards are just a "bonus."

But I'm not a complete hater, as I've sipped the psycho-poison myself and find it interesting from the CompSci perspective.

Being cautious or completely abstaining or wanting to tear it all down are reasonable stances that don't anger me at all.

I'd like to boost this but you quote posted a reply to a quote post. How is @[email protected] supposed to know you're talking about them?

@cy @pluralistic I'm not talking about them. The specific post I quoted reminded me to talk about why I avoid the tech, from a personal perspective.

You'll note there's nothing in the post that talks about anybody else's perspective. I could have left out the quoted post, as I did when I posted the same on Bluesky, but I left it in here so people could see what triggered the thought.

It's fine, mostly I'm just frustrated with the "quote post" mechanic and how it's being executed. You can also copy and paste the URI into your post, and add "#TotallyNotAQuotePost" to the tags. (Though that's considered a "web mention" which some software does notify people about, unlike quote posts.)
@cy Thanks. I probably should pay more attention to how other people use quote posts and adjust my behaviour accordingly.
I've been avoiding 'em myself, but I can't say I know the perfect solution.
@baldur @monkeyborg @glyph I'm lukewarm and share your skepticism. I'm absolutely certain it's brain poison. People have committed suicide using LLMs. It's just cleaner than using ad-rotten search engines for some stuff >_>