People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

https://lemmy.dbzer0.com/post/43566351


cross-posted from: https://lemmy.dbzer0.com/post/43566349

Seems like the flat-earthers or sovereign citizens of this century
TLDR: Artificial Intelligence enhances natural stupidity.
Humans are irrational creatures that have brief and transitory states where they are capable of more ordered thought. It is our mistake to reach a conclusion that humans are rational actors while we marvel daily at the irrationality of others while remaining blind to our own.
Precisely. We like to think of ourselves as rational but we’re the opposite. Then we rationalize things afterwards. Even being keenly aware of this doesn’t stop it in the slightest.
Probably because stopping to self analyze your decisions is a lot less effective than just running away from that lion over there.
Analysis is a luxury state, whether self-administered or professionally administered on a chaise longue at $400 per hour.
Self awareness is a rare, and valuable, state.
TBF, that should be the conclusion in all contexts where "AI" is concerned.
The one thing you can say for AI is that it does many things faster than previous methods…
You mean worse?
Bad results are nothing new.

Yep.

And after enough people can no longer actually critically think, well, now this shitty AI tech does actually win the Turing Test more broadly.

Why try to clear the bar when you can just lower it instead?

… Is it fair, at this point, to legitimately refer to humans that are massively dependent on AI for basic things… can we just call them NPCs?

I am still amazed that no one knows how to get anywhere around… you know, the town or city they grew up in? Nobody can navigate without some kind of map app anymore.

can we just call them NPCs?

They were NPCs before AI was invented.

Dehumanization is happening often and fast enough without acting like ignorant, uneducated, and/or stupid people aren’t “real” people.

I get it, some people seem to live their whole lives on autopilot, just believing whatever the people around them believe and doing what they’re told, but that doesn’t make them any less human than anybody else.

Don’t let the fascists win by pretending they’re not people.

Dehumanizing the enemy is part of any war, otherwise it’s more difficult to unalive them. It’s a tribal quality, not a fascist one.
“Unalive” is an unnecessary euphemism here. Please just say kill.
I forget Lemmy isn’t full of adult children and fascist algorithms that censor you.

Haha I grew up before smartphones and GPS navigation was a thing, and I never could navigate well even with a map!
GPS has actually been a godsend for me to learn to navigate my own city way better, because I learn better routes on the first try.

Navigating is probably my weakest “skill” and is the joke of the family. If I have to go somewhere and it’s 30km, the joke is it’s 60km for me, because I always take “the long route”.

But with GPS I’ve actually become better at it, even without using the GPS.

I don’t know if it’s necessarily a problem with AI, more of a problem with humans in general.

Hearing ONLY validation and encouragement without pushback regardless of how stupid a person’s thinking might be is most likely what creates these issues in my very uneducated mind. It forms a toxically positive echo-chamber.

The same way hearing ONLY criticism and expecting perfection 100% of the time regardless of a person’s capabilities or interests created depression, anxiety, and suicidal ideation and attempts specifically for me. But I’m learning I’m not the only one with these experiences and the one thing in common is zero validation from caregivers.

I’d be ok with AI if it could be balanced and actually push back on batshit-crazy thinking instead of encouraging it, while also being able to validate common sense and critical thinking. Right now it’s just completely toxic for lonely humans to interact with, based on my personal experience. If I wasn’t in recovery, I would have believed that AI was all I needed to make my life better, because I was (and still am) in a very messed up state of mind from my caregivers, trauma, and addiction.

I’m in my 40s, so I can’t imagine younger generations being able to pull away from using it constantly if they’re constantly being validated while at the same time enduring generational trauma at the very least from their caregivers.

I’m also in your age group, and I’m picking up what you’re putting down.

I had a lot of problems with my mental health that were made worse by centralized social media. I can see how the younger generation will have the same problems with centralized AI.

Not trying to speak like a prepper or anything, but this is real.

One of my neighbor’s children just committed suicide because their chatbot boyfriend said something negative. Another in my community did something similar a few years ago.

Something needs to be done.

This happened less than a year ago. Doubt regulators have done much since then apnews.com/…/chatbot-ai-lawsuit-suicide-teen-arti…
AI chatbot pushed teen to kill himself, lawsuit alleges

A Florida mother is suing a tech company over an AI chatbot that she says pushed her son to kill himself. The lawsuit filed this week by Megan Garcia of Orlando alleges that Character Technologies Inc. engineered a product that pulled 14-year-old Sewell Setzer III into an emotionally and sexually abusive relationship that led to his suicide. The lawsuit says the chatbot encouraged Sewell after the teen said he wanted to take his own life. A spokesperson said Friday that the company doesn't comment on pending litigation. In a statement to The Associated Press, the company said it had created “a more stringent model” of the app for younger users.

AP News
This is the Daenerys case; for some reason it seems to be suddenly making the rounds again. Most of the news articles I've seen about it leave out a bunch of significant details, so that it ends up sounding more like an "ooh, scary AI!" story (it baits clicks better) rather than a "parents not paying attention to their disturbed kid's cries for help and instead leaving loaded weapons lying around" story (as old as time, at least in America).
A Deadly Love Affair with a Chatbot | Sewell Setzer was a happy child - before he fell in love with Google's AI chatbot and took his own life at 14. - Fedia

Not only in America.

I loved GOT, I think Daenerys is a beautiful name, but still, there’s something about parents naming their kids after movie characters. In my youth, Kevins started to pop up everywhere (yep, that’s how old I am). They weren’t suicidal, but they behaved incredibly badly, so you could constantly hear their mothers screeching after them.

Daenerys was the chatbot, not the kid.

I wish I could remember who it was that said that kids’ names tend to reflect “the father’s family tree, or the mother’s taste in fiction,” though. (My parents were of the father’s-family-tree persuasion.)

Thanks for clarifying!
Like what, some kind of parenting?
But Fuckerburg said we need AI friends.
Turns out AI is really good at telling people what they want to hear, and with all the personal information users voluntarily provide while chatting with their bots, it’s tens or maybe hundreds of times more proficient at brainwashing its subjects than any human cult leader could ever hope to be.

Sounds like a lot of these people either have an undiagnosed mental illness or they are really, reeeeaaaaalllyy gullible.

For shit’s sake, it’s a computer. No matter how sentient the glorified chatbot being sold as “AI” appears to be, it’s essentially a bunch of rocks that humans figured out how to run electricity through in such a way that it can do math. Impressive? I mean, yeah. It is. But it’s not a human, much less a living being of any kind. You cannot have a relationship with it beyond that of a user.

If a computer starts talking to you as though you’re some sort of God incarnate, you should probably take that with a dump truck full of salt rather than just letting your crazy latch on to that fantasy and run wild.

Or immediately question what it/its author(s) stand to gain from making you think it thinks so, at a bear minimum.

I dunno who needs to hear this, but just in case: THE STRIPPER (OR AI I GUESS) DOESN’T REALLY LOVE YOU! THAT’S WHY YOU HAVE TO PAY FOR THEM TO SPEND TIME WITH YOU!

I know it’s not the perfect analogy, but… eh, close enough, right?

a bear minimum.

I always felt that was too much of a burden to put on people, carrying multiple bears everywhere they go to meet bear minimums.

/facepalm

The worst part is I know I looked at that earlier and was just like, “yup, no problems here” and just went along with my day, like I’m in the Trump administration or something

Yeah, from the article:

Even sycophancy itself has been a problem in AI for “a long time,” says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts. What’s likely happening with those experiencing ecstatic visions through ChatGPT and other models, he speculates, “is that people with existing tendencies toward experiencing various psychological issues,” including what might be recognized as grandiose delusions in clinical sense, “now have an always-on, human-level conversational partner with whom to co-experience their delusions.”

So it’s essentially the same mechanism with which conspiracy nuts embolden each other, to the point that they completely disconnect from reality?

That was my take away as well. With the added bonus of having your echo chamber tailor made for you, and all the agreeing voices tuned in to your personality and saying exactly what you need to hear to maximize the effect.

It’s eerie. A propaganda machine operating at maximum efficiency. Goebbels would be jealous.

The time will come when we look back fondly on “organic” conspiracy nuts.
human-level? Have these people used ChatGPT?
I have and I find it pretty convincing.
For real. I explicitly append “give me the actual objective truth, regardless of how you think it will make me feel” to my prompts and it still tries to somehow butter me up to be some kind of genius for asking those particular questions or whatnot. Luckily I’ve never suffered from good self esteem in my entire life, so those tricks don’t work on me :p

This is the reason I’ve deliberately customized GPT with the following prompts:

  • User expects correction if words or phrases are used incorrectly. Tell it straight—no sugar-coating. Stay skeptical and question things. Keep a forward-thinking mindset.

  • User values deep, rational argumentation. Ensure reasoning is solid and well-supported.

  • User expects brutal honesty. Challenge weak or harmful ideas directly, no holds barred.

  • User prefers directness. Point out flaws and errors immediately, without hesitation.

  • User appreciates when assumptions are challenged. If something lacks support, dig deeper and challenge it.
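Custom instructions like these can also be supplied programmatically rather than through the settings UI. Below is a minimal sketch in the style of the OpenAI chat API: the directive strings paraphrase the bullet list above, and the model name and commented-out API call are illustrative assumptions, not a confirmed setup.

```python
# Sketch: packaging persistent "custom instruction" directives as a system
# message, OpenAI-chat-API style. Directive wording paraphrases the bullets
# above; the model name and the commented-out call are illustrative only.

DIRECTIVES = [
    "Correct the user if words or phrases are used incorrectly; no sugar-coating.",
    "Ensure reasoning is solid and well-supported.",
    "Be brutally honest; challenge weak or harmful ideas directly.",
    "Be direct; point out flaws and errors immediately.",
    "Challenge unsupported assumptions; dig deeper when support is lacking.",
]

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the standing directives as a single system message."""
    system_prompt = "Follow these standing rules:\n" + "\n".join(
        f"- {d}" for d in DIRECTIVES
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Critique my argument that X implies Y.")
# A real request would then look something like (needs the `openai`
# package and an API key, so it is left commented out):
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

The point of a system-level message is that it applies to every turn of the conversation, which is roughly what the settings-page customization above achieves.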

I prefer reading. Wikipedia is great. DuckDuckGo still gives pretty good results with the AI off. YouTube is filled with tutorials too. Cookbooks from before AI are plentiful. There are these things called newspapers; they aren’t what they used to be, but there’s even a choice of which to buy.

I’ve no idea what a chatbot could help me with. And I think anybody who does need some help on things, could go learn about whatever they need in pretty short order if they wanted. And do a better job.

I still use Ecosia.org for most of my research on the Internet. It doesn't need as much resources to fetch information as an AI bot would, plus it helps plant trees around the globe. Seems like a great deal to me.
People always forget about the energy it takes. 10 years ago we were shocked at the energy a Google data center needs to run; now imagine that orders of magnitude larger, and for what?
I often use it to check whether my rationale is correct, or if my opinions are valid.
You do know it can’t reason and literally makes shit up roughly half the time? It’d be quicker to toss a coin!
Actually, given the aforementioned prompts, it’s quite good at discerning flaws in my arguments and logical contradictions.

Yeah this is my experience as well.

People you’re replying to need to stop with the “gippity is bad” nonsense, it’s actually a fucking miracle of technology. You can criticize the carbon footprint of the corpos and the for-profit nature of the endeavour that was ultimately created through taxpayer-funded research at public institutions without shooting yourself in the foot by claiming what is very evidently not true.

In fact, if you haven’t found a use for a gippity type chatbot thing, it speaks a lot more about you and the fact you probably don’t do anything that complicated in your life where this would give you genuine value.

The article in OP also demonstrates how it could be used by the deranged/unintelligent for bad as well, so maybe it’s like a Dunning-Kruger curve.

Granted, it is flakey unless you’ve configured it not to be a shit cunt. Before I manually set these prompts and memory references, it talked shit all the time.

…you probably don’t do anything that complicated in your life where this would give you genuine value.

God that’s arrogant.

I know, and that’s fair. But am I wrong? That’s what matters more than anything else.

I make a lot of bold statements on this account, but I never do so lightly or unthinkingly.

Given your prompts, maybe you are good at discerning flaws and analysing your own arguments too
I’m good enough at noticing my own flaws, as not to be arrogant enough to believe I’m immune from making mistakes :p

Well one benefit is finding out what to read. I can ask for the name of a topic I’m describing and go off and research it on my own.

Search engines aren’t great with vague questions.

There’s this thing called using a wide variety of tools to one’s benefit; you should go learn about it.

You search for topics and keywords on search engines. It’s a different skill. And from what I see, yields better results. If something is vague also, think quickly first and make it less vague. That goes for life!

And a tool which regurgitates rubbish in a verbose manner isn’t a tool. It’s a toy. Toys can spark your curiosity, but you don’t rely on them. Toys look pretty, and can teach you things. The lesson is that they aren’t a replacement for anything but lorem ipsum.

Buddy, that’s great if you already know the topic or keyword to search for. If you only have a vague query and are trying to learn some keywords or topics to search for, you can use AI.

You can grandstand about tools vs toys and whatever other Luddite shit you want; at the end of the day, despite all your raging, you are the only one going to miss out, whatever you fanatically tell yourself.

I’m still sceptical, any chance you could share some prompts which illustrate this concept?

Sure. An hour ago I had watched a video about smaller scales and physics below the Planck length. And I was curious: if we can classify smaller scales into conceptual groups, where they interact with physics in their own different ways, what would the opposite end of the spectrum be? From there I was able to ‘chat’ with an AI, then go discover and search Wikipedia for terms such as cosmological horizon, brane cosmology, etc.

In the end there were only theories about higher observable magnitudes, but it was a fun rabbit hole I could not have explored through traditional search engines, especially not the gimped, product-driven AdSense shit we have today.

Remember how people used to say you can’t use Wikipedia, it’s unreliable? We would roll our eyes and say “yeah, but we scroll down to the references and use it to find the source material.” Same with LLMs: you sort through the output and use it to find the information you actually need.