This story about ChatGPT causing people to have harmful delusions has mind-blowing anecdotes. It's an important, alarming read.

https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/

People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

Marriages and families are falling apart as people are sucked into fantasy worlds of spiritual prophecy by AI tools like OpenAI's ChatGPT

Rolling Stone
@grammargirl there's a new post from someone like this every day on Reddit 😟 super alarming
@janeadams I've been seeing studies for a while about how persuasive it can be, but this is next level.
@grammargirl Also a little disconcerting that Rolling Stone is citing the Center for AI Safety here... like, the quote itself isn't problematic, but that org has some deep ties to the effective altruism and longtermism movements and some pretty fringe perspectives 😬
@janeadams @grammargirl at Rolling Stone, citing troubling sources is a thing that only happens on days that end in Y
@grammargirl @janeadams Good grief. AI, the real life invasion of the body snatchers. I have a friend who has fallen down the AI rabbit hole. :(

@grammargirl @janeadams

Can you share some of them? I would like to read and share them too.

I keep a public list about these things at https://notes.bayindirh.io/notes/Lists/Discussions+about+Artificial+Intelligence

@grammargirl

This is so baffling to me. I've seen a less extreme version of this from some people ... who I thought would have known better.

What I find infuriating is how the industry selling LLMs has encouraged this kind of thinking with alarmist "AI might take over" and "here comes the singularity" talk. So irresponsible and dishonest.

That said, if you lose someone to an AI-fueled spiritual fantasy, I think in the absence of the tech it would have just been something else.

@futurebird Maybe it would have been something else for some people, but I saw someone say that it's like everyone who is susceptible to cult thinking now has 24/7 direct access to a cult leader, one designed to reel each person in specifically. And that felt true to me.

@grammargirl @futurebird it's just shy of 2 years old now, but I've continued to believe this is the best explanation. My first instinct was to read the article you shared as the tragic "logical conclusion" of that line of thought:

https://softwarecrisis.dev/letters/llmentalist/

> I first thought that these were just classic cases of tech bubble enthusiasm, but no [...] This specific blend of awe, disbelief, and dread all sound like the words of a victim of a mentalist scam artist—psychics

@jhwgh1968 @futurebird Thanks, that was an interesting article. In the cold-reading part, I kept thinking about how much info ChatGPT could have about someone who has used it for any length of time, especially if they don't turn off memory.
@grammargirl @jhwgh1968 @futurebird
Or if turning off memory isn’t as complete as they claim (as suggested in the article in the original post)
@jhwgh1968 @grammargirl @futurebird oh yeah, @baldur is on here, and this article covers some of the same ground as the recent Rolling Stone piece
@jhwgh1968 @grammargirl @futurebird very interesting article, thank you for sharing it!

@grammargirl

I had a friend who thought ChatGPT was "very helpful with criticism of poetry" ... and that's true in the sense that it will look at phrases in your poem and Frankenstein together some responses with the general form and tone of what real people have said about real poems with similar patterns.

That might give someone some ideas for revisions along stereotypical lines.

It can't scan rhythm. It could easily miss the whole point and lead you to make derivative revisions.

@grammargirl

I tried to explain why I thought it was a less than ideal method of getting feedback.

But I bumped up against a LOT of resistance. More than made sense to me.

So I decided to try it with one of my own poems.

The "feedback" was very flattering, ego-stroking in tone. Which made me really uncomfortable. I have no reason to think any real person might respond in the same way.

But I could see how, if it seemed like "meaningful" feedback, being told it's not wouldn't be pleasant.

@grammargirl

Asking an LLM about your poems isn't the same as turning to it for religion... but I think it's along the same lines.

@futurebird @grammargirl Like asking your mom. Which everyone tells you never to do.

@Wyatt_H_Knott @futurebird @grammargirl I implore you wonderful people to try local LLMs with an open mind. Like most powerful tools, you get out of it what you put in and it can surely be misused.

Once it's running on an aging, modest computer right next to you, it's hard not to notice the staggering implications for our relationship with society's existing power structures. I have a personal tutor right out of Star Trek that nobody can take away from me or use to spy on me. Apart from all the cool automation and human-machine interaction it opens up, for an increasing number of tasks I can completely stop using search engines.

I think the "AI is just NFTs again but more evil; it just means the inevitable acceleration of billionaire takeover and ecosystem destruction; everyone who uses it is a gullible rube at best and an enemy at worst" clickbait ecosystem is misleading and unhealthy. It fuels conspiratorial thinking, incurious stigma, and a lot of unnecessary infighting.

@Wifiwits @futurebird @Wyatt_H_Knott @grammargirl I'm trying to have a real conversation about an important subject, can you please be a child elsewhere?
@necedema @futurebird @Wyatt_H_Knott @grammargirl as I read here recently, an organic, fair-trade, co-operatively owned, open-source oil rig… is still an oil rig.
@necedema @Wifiwits @futurebird @Wyatt_H_Knott @grammargirl You're using chatgpt to sealion from a kolektiva.social account & it's extremely funny.
@necedema @Wifiwits @futurebird @Wyatt_H_Knott @grammargirl
Everyone loves their bubbles and hates them being popped. It's the same everywhere.
@necedema All else aside I got curious: how does it replace search?
@Mabande @necedema In practice, it does replace search for many people, but it shouldn't. People "look up" things with LLMs that would just have been a Wikipedia lookup, and instead they get an incorrect answer that they trust more. It's the 2020s version of "the computer said it, it must be true".
@ahltorp @Mabande Funny, the main reason I've taken to replacing search with LLMs for technical subjects is that they tend to give more accurate responses than the random humans who pollute search results with uninformed drivel.
@necedema Oh yeah, search is horrible these days! I was just kinda wondering how you verify the LLM output / sources?

@Mabande Compiler, shell, or other parser error, typically. For a lot of things, you can just look at it and tell whether the answer is correct or not. If you're unsure, you can ask the LLM for more surrounding information and a lot of times it will correct itself if it was wrong. You can also hook them up to tools or Retrieval-Augmented Generation (RAG) so they can fetch updated information and pluck out the details you need.
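
For instance, here's a toy version of that first check; purely illustrative, and the snippet being tested is a stand-in for whatever the model actually handed you:

```python
# Toy "let the parser be the judge" check: syntax-check an LLM-suggested
# snippet before trusting it. This only parses; nothing gets executed.
import ast

llm_suggestion = """
def mean(xs):
    return sum(xs) / len(xs)
"""

try:
    ast.parse(llm_suggestion)
    print("parses cleanly; worth a closer look")
except SyntaxError as err:
    print(f"reject it and ask the model to fix it: {err}")
```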

The people who build a personality out of complaining about things seriously misrepresent LLM inaccuracy and the woes that may ensue, in my experience. Notice how nobody provides full logs in any of these articles where they make outrageous claims.

@necedema Thanks!
I think the difference in experience could relate to domain knowledge?
Like, you seem to know tech on a fundamental level, so you can both prompt correctly and see if the answer's wrong for what you use it for, while a low-knowledge user won't prompt correctly or realize the answer is wrong.
So then someone like the partners in the article, asking Big Questions without knowing how to think about the answers, will be awed by the confident-sounding positive feedback loop.

@Mabande I understand what you're saying, but in my view the difference is simply about whether someone is open-minded enough to practice and improve with a new tool that is being actively stigmatized. LLMs don't just produce randomly wrong output; with simpler questions, reliability approaches 100%. Lots of people will feed one a confusing word salad a single time so it produces a stupid output, and then they can self-assuredly complain on social media like they know what they're talking about. It's about prejudicial reinforcement, not personal growth or understanding. Regardless, I don't think this criticism (or any other that I have read) is unique to LLMs; inexperienced computer users have a harder time getting good information regardless of the tool they use.

As for those people in the article, I think it's safe to say they weren't mentally healthy in the first place. There are many ways that people seek to reinforce paranoid delusions; it doesn't surprise me that LLMs would be one of them. Nonetheless, I think it's telling that logs are virtually never provided to support the more extreme claims people make.

@Mabande @necedema
Exactly. It's just like any powerful tool, say a power saw. Whether you understand it and how to use it determines whether you can do amazing things with it or cut off someone's arm (even your own). So it's not so much a matter of using them with an open mind as with an informed one. I know that I myself have used them successfully many times, but I know how to prompt them, and how and when to check their answers.
@murdoc @Mabande It is necessarily a process of learning, which cannot begin if one adheres to self-imposed dogma. An open mind is essential. Check several of the other replies to me and ask yourself if they're going to be learning anything.
@necedema @Mabande
Well yes, having an open mind would be essential to having an informed one. I just meant that it's not enough by itself. That way can lead to the problems of the original topic.
@Mabande @necedema I use it to find out about things I wouldn't know to search in the first place. Like, I'll describe an idea or thought or method, and it will give me a name for it. Or I can ask for related/similar concepts. Then, I use a standard search engine to read up on it from a source that's actually reliable, with the new term(s) in hand.
@Mabande @necedema Not a replacement, but an augmentation.

@necedema @Wyatt_H_Knott @grammargirl

Do "local LLMs" use training data compiled locally only? Or do you have a copy (a snapshot) of the associative matrices used by cloud based LLMs stored locally so you can run LLM prompts without an internet connection?

@futurebird @necedema @Wyatt_H_Knott @grammargirl If you have the memory & compute power on your local machine to actually run it, you can download the whole thing and run it locally, completely disconnected. It's conceivable that you could use only local training data, but good luck gathering enough local data. Also, training it would be unbelievably time-consuming if you don't use the cloud and you want anything robust. What's usually done is you fine-tune a pre-trained model on your own data, and then feed it local data and system prompts to make the responses appropriate to your use case.
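
To make that last step concrete, here's a minimal sketch of fine-tuning with the Hugging Face transformers/datasets stack; the base model name and the data file are placeholders, not recommendations:

```python
# Fine-tune a small pre-trained causal LM on a single local text file.
# "EleutherAI/pythia-160m" and "my_notes.txt" are stand-in choices.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "EleutherAI/pythia-160m"
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token  # causal LMs often ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Your local corpus: one plain-text file, one example per line.
data = load_dataset("text", data_files={"train": "my_notes.txt"})["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
    # mlm=False means plain next-token prediction, not masked LM
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
trainer.save_model("finetuned")
```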

@hosford42 @necedema @Wyatt_H_Knott @grammargirl

So, almost no one is using this tool in this way.

Very few are running these things locally. Fewer still are creating their own (attributed, responsibly obtained) data sources. What that tells me is that this isn't about the technology that allows this kind of recomposition of data; it's about using (exploiting) the vast sea of information online in a novel way.

@futurebird @hosford42 @necedema @Wyatt_H_Knott @grammargirl The assumption that it's as accurate as the Star Trek computer once you've done this is so wide of the mark, though. Moments ago I asked Google about the formula for calculating the resonance of a tuned LC circuit, and two of the steps in the four-step AI reply were completely pointless and didn't do what it said they did.
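
For the record, the actual formula is a one-liner, no four-step procedure required; a quick sanity check in Python, with made-up component values:

```python
# Resonant frequency of an LC circuit: f = 1 / (2 * pi * sqrt(L * C))
from math import pi, sqrt

L, C = 10e-3, 100e-9             # 10 mH and 100 nF, example values only
f = 1 / (2 * pi * sqrt(L * C))
print(f"{f:.0f} Hz")             # ~5033 Hz
```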
@synx508 @futurebird @necedema @Wyatt_H_Knott @grammargirl When I use LLMs, I easily spend half of the time weeding the content to identify what's actually useful. Even the best model for the task has this problem.
@hosford42 @futurebird @necedema @Wyatt_H_Knott @grammargirl They're like a rubber-ducking system sometimes. I guess that is useful, but it's possible to fool yourself into believing that the intelligence you're using to hammer the machine into producing the shape of answer you're looking for is the machine's rather than yours. Clever people used to be able to have hilarious conversations with MegaHAL on the same basis. Lately I've been throwing subtly incorrect statements into the Google search box to see how it would patronisingly correct me. It doesn't always do that; about 25% of the time it'll create an answer founded on the belief that I must be correct and it somehow didn't already know something, forming its own pseudo-beliefs to support my statement. I don't think this is great or useful; it is dangerous.

@hosford42 @necedema @Wyatt_H_Knott @grammargirl

It's like the sophomoric "hack" for keeping up with all the damned five-paragraph essays. Just search on the internet for a few documents with good-sounding paragraphs. Copy and paste chunks of sentences into a word document. Then carefully read and reword it all so it's "not plagiarism".

This is still plagiarism.

GPT will cheerfully help with that last step now.

@hosford42 @necedema @Wyatt_H_Knott @grammargirl

And yet none of the English and History teachers I know are very worried about this. Because the results of this process are always C-student work. The essay has no momentum, no cohesive idea justifying its existence beyond "I needed to turn some words in".

It’s swill.

@futurebird @hosford42 @necedema @Wyatt_H_Knott @grammargirl

English and History teachers have not been thinking their whole lives about (a) the position of humans in a technological society and (b) the fallout from technology.

I seriously think they don't see the holocaust of culture coming.

@futurebird this sounds a lot harder than actually writing something 🐸😹

@futurebird @hosford42 @necedema @Wyatt_H_Knott @grammargirl

There's a whole movement of running them locally but it's niche, though the smaller versions are getting better.

But nobody is creating them locally. Fine-tuning with your own data yes, but that's a little sauce over existing training data.

@futurebird @hosford42 @necedema @Wyatt_H_Knott @grammargirl if you're interested in more "ethically trained" LLMs, the Allen AI Institute has been doing really interesting work: https://allenai.org/

@hosford42 @futurebird @necedema @Wyatt_H_Knott @grammargirl how can you have all that training data stored locally? if stored locally u certainly can't use it as a search engine for the wealth of material on the www.

i don't understand

@barrygoldman1 @hosford42 @futurebird @necedema @Wyatt_H_Knott @grammargirl There's a global wave of data center construction. They're not spending hundreds of billions of dollars building out storage and processing for data you can handle locally on your phone.

@barrygoldman1 @hosford42 @futurebird @necedema @Wyatt_H_Knott @grammargirl

Here's the full DeepSeek model, for example: https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B

And a version you can run on an 8GB GPU locally https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-7B

Edit: you don't need the source training data.
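
If you're wondering what running the small one locally actually looks like, here's a rough sketch using the transformers and bitsandbytes libraries, loading the 7B model in 4-bit so it fits in ~8 GB of GPU memory; the prompt is just an example:

```python
# Load the 7B model quantized to 4-bit and generate entirely locally;
# no internet needed after the one-time download of the weights.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig)

model_id = "deepseek-ai/DeepSeek-Prover-V2-7B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",           # spill to CPU RAM if the GPU fills up
)

prompt = "Prove that the sum of two even integers is even."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200)
print(tok.decode(out[0], skip_special_tokens=True))
```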

@barrygoldman1 @hosford42 @futurebird @necedema @Wyatt_H_Knott @grammargirl I mean, online ones aren't good search engines either. But yes, you can have them stored locally. They essentially work somewhat like a database, but in a way it's sort of lossy compression, so they can be small enough to fit locally. Especially if you increase the lossiness.

A local model can actually be decent quality compared to the online ones despite the "dumbing down." The losses tend to be mostly fluff. Unfortunately, the training data is still the real problem. If you ask about a quest in Skyrim, it can probably tell you right. If you ask about some show you saw ten years ago that few people ever heard of, it might recognize it and tell you a bit, but it will probably get characters wrong or something.
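
The back-of-the-envelope math on that lossiness is simple, by the way (weights only, ignoring overhead):

```python
# Why quantization makes a "7B" model fit on consumer hardware:
# size ~= parameter count * bits per weight / 8 bits per byte.
params = 7e9
for bits in (16, 8, 4):
    gb = params * bits / 8 / 1e9
    print(f"{bits:>2}-bit weights: ~{gb:.1f} GB")
# 16-bit: ~14 GB, 8-bit: ~7 GB, 4-bit: ~3.5 GB (fits an 8 GB GPU)
```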

@barrygoldman1 @futurebird @necedema @Wyatt_H_Knott @grammargirl You can't. Not if you have enough of it to fully train an LLM. That's kind of my point.
@futurebird @necedema @Wyatt_H_Knott @grammargirl The latter, generally. If you look on Huggingface you can find different size models that can be run locally depending on how much GPU RAM you have etc.
@futurebird Almost always the latter. So they're still trained in the cloud (on completely proprietary training data) using ethically dubious (and frequently legally dubious) methods. The only actual upside over the cloud models is prompt privacy, as you can usually verify/enforce that your prompts never leave the device.