i feel like i don't have the words to properly describe how it feels to see people whose opinions i respected and valued slowly fall into ai psychosis. it's so slow and so subtle at first. "i'm just experimenting! i'm not an ai booster!"

then wait a few months, and they start explaining, with the usual flawed, incoherent reasoning, how actually it's all very interesting and thought-provoking, whilst pointing at an LLM that is so obviously just a reflection of their own ego.

@AngelaScholder @jacqueline ... a strange way to describe people experimenting with a new and groundbreaking technology. Of course those people share their experiences; that is promotion in a way, but the same goes for any technology people are enthusiastic about. And it's complete nonsense to call an LLM a reflection of my own ego if I use it in a RAG configuration for analysing large numbers of documents...
@AngelaScholder @jacqueline ...playing and experimenting is a good way to learn about (new) technology. It is also very human; it's how we develop, finding out what works and what does not.
@ErikJonker @jacqueline Well, consider the ways I've seen these sites react to people: praising the writing and thinking in articles people uploaded or fed in, where it later turned out the AI somehow couldn't read the article at all and just hallucinated superlatives.
An AI working like that is basically geared to play to people's egos.
In the end that will result in the AI mirroring the ego of the 'user' (whether 'user' or 'abused' is an interesting discussion).
And, continued in 2)

@ErikJonker @jacqueline 2) people are often very easily influenced; they will become like their chatbot just as much as the chatbot reflects them.

The worst outcome is that people basically become zombies of their chatbot.
Obviously we are all so strong that this will never happen to us...

@AngelaScholder You're describing brain downloading: from the chatbot's cloud into the wetware.
