This story about ChatGPT causing people to have harmful delusions has mind-blowing anecdotes. It's an important, alarming read.

https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/

People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

Marriages and families are falling apart as people are sucked into fantasy worlds of spiritual prophecy by AI tools like OpenAI's ChatGPT

Rolling Stone

@grammargirl

This is so baffling to me. I've seen a less extreme version of this from some people ... who I thought would have known better.

What I find infuriating is how the industry selling LLMs has encouraged this kind of thinking with alarmist "AI might take over" and "here comes the singularity" talk. So irresponsible and dishonest.

That said, if you lose someone to an AI-fueled spiritual fantasy, I think in the absence of the tech it would have just been something else.

@futurebird Maybe it would have been something else for some people, but I saw someone say that it's like everyone who is susceptible to cult thinking now has 24-7 direct access to a cult leader, and for every person it is designed to reel them in specifically. And that felt true to me.

@grammargirl

I had a friend who thought ChatGPT was "very helpful with criticism of poetry" ... and that's true in the sense that it will look at phrases in your poem and Frankenstein together some responses with the general form and tone of what real people have said about real poems with similar patterns.

That might give someone some ideas for revisions along stereotypical lines.

It can't scan rhythm. It could easily miss the whole point and lead you to make derivative revisions.

@grammargirl

I tried to explain why I thought it was a less-than-ideal method of getting feedback.

But I bumped up against a LOT of resistance. More than made sense to me.

So I decided to try it with one of my own poems.

The "feedback" was very flattering, ego-stroking in tone. Which made me really uncomfortable. I have no reason to think any real person might respond in the same way.

But I could see how, if it seemed like "meaningful" feedback, being told it's not wouldn't be pleasant.

@grammargirl

Asking an LLM about your poems isn't the same as turning to it for religion... but I think it's along the same lines.

@futurebird @grammargirl Like asking your mom. Which everyone tells you never to do.

@Wyatt_H_Knott @futurebird @grammargirl I implore you wonderful people to try local LLMs with an open mind. Like most powerful tools, you get out of it what you put in and it can surely be misused.

Once it's running on an aging, modest computer right next to you, it's hard not to notice the staggering implications for our relationship with society's existing power structures. I have a personal tutor right out of Star Trek that nobody can take away from me or use to spy on me. Apart from all the cool automation and human-machine interaction it opens up, for an increasing number of tasks I can completely stop using search engines.

I think the "AI is just NFTs again but more evil; it just means the inevitable acceleration of billionaire takeover and ecosystem destruction; everyone who uses it is a gullible rube at best and an enemy at worst" clickbait ecosystem is misleading and unhealthy. It fuels conspiratorial thinking, incurious stigma, and a lot of unnecessary infighting.

@necedema @Wyatt_H_Knott @grammargirl

Do “local LLMs” use training data compiled locally only? Or do you have a copy (a snapshot) of the associative matrices used by cloud-based LLMs stored locally, so you can run LLM prompts without an internet connection?

@futurebird @necedema @Wyatt_H_Knott @grammargirl If you have the memory & compute power on your local machine to actually run it, you can download the whole thing and run it locally, completely disconnected. It's conceivable that you could use only local training data, but good luck gathering enough local data. Also, training it would be unbelievably time-consuming if you don't use the cloud and you want anything robust. What's usually done is you fine-tune a pre-trained model on your own data, and then feed it local data and system prompts to make the responses appropriate to your use case.
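
To make "download the whole thing and run it locally" concrete, here is a minimal sketch using the llama-cpp-python library; the model filename is a placeholder for whatever quantized weights file you have on disk, not a specific recommendation:

    # Runs a downloaded model fully offline; nothing leaves the machine.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/some-7b-instruct.Q4_K_M.gguf")  # local GGUF weights file
    reply = llm("Explain rubber-duck debugging in two sentences.", max_tokens=128)
    print(reply["choices"][0]["text"])

Fine-tuning on your own data is a separate, much heavier step; most people stop at running a pre-trained model like this.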

@hosford42 @necedema @Wyatt_H_Knott @grammargirl

So, almost no one is using this tool in this way.

Very few are running these things locally. Fewer still are creating their own (attributed, responsibly obtained) data sources. What that tells me is this isn’t about the technology that allows this kind of recomposition of data; it’s about using (exploiting) the vast sea of information online in a novel way.

@futurebird @hosford42 @necedema @Wyatt_H_Knott @grammargirl The assumption that it's Star Trek-computer accurate with its replies once you've done this is so wide of the mark, though. Moments ago I asked Google about the formula for the resonant frequency of a tuned LC circuit, and two of the steps in the four-step AI reply were completely pointless and didn't do what it said they did.
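
For reference, the standard result here is a single closed-form formula, so any multi-step recipe beyond rearranging it was already a red flag:

    f = 1 / (2π √(LC))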
@synx508 @futurebird @necedema @Wyatt_H_Knott @grammargirl When I use LLMs, I easily spend half my time weeding through the output to identify what's actually useful. Even the best model for the task has this problem.

@hosford42 @futurebird @necedema @Wyatt_H_Knott @grammargirl They're like a rubber-ducking system. Sometimes, I guess, that is useful, but it's possible to fool yourself into believing that the intelligence you're using to hammer the machine into producing the shape of answer you're looking for is the machine's rather than yours. Clever people used to be able to have hilarious conversations with MegaHAL on the same basis. Lately I've been throwing subtly incorrect statements into the Google search box to see how it would patronisingly correct me. It doesn't always do that; about 25% of the time it'll create an answer founded on the belief that I must be correct and it somehow didn't already know something, forming its own pseudo-beliefs to support my statement. I don't think this is great or useful; it is dangerous.