I sometimes wonder how people coped with the fact that they actually enslaved other people, but then I remember that they simply treated them as "tools". "Just a tool", they'd say when talking about them.

I wonder how big the difference is, from a neurological point of view, between enslaving a "just a tool" AI "agent" and an actual human "slave".

We also build fake relationships with random 2D characters, so it's not like it's entirely absurd, is it?

Dunno... maybe it's just me, but I find it interesting to wonder what it does to people.

Like, people actually got emotionally shattered when their AI "friend" all of a sudden changed character due to a model update.

And I'm curious how much sincere emotion an AI agent can trigger in humans. Like I don't believe for a second that people "fall" for AI agents because they think they have AGI in front of them, but rather because it feels human enough to the brain.

Which could mean, as a consequence, that human-to-AI-agent behavior is best explained by assuming it's human-to-human behavior.

Even though that's totally not what's going on here.

I'm sure the entire academic landscape on this topic will be _wild_, and I'm sure it'll show more nuance than I'm managing here.

> Like I don't believe for a second that people "fall" for AI agents because they think they have AGI in front of them, but rather because it feels human enough to the brain.

This has a very weird consequence: it's not that people are being tricked; it's their own brain tricking them instead.

Okay, so. Let's imagine that's a valid way to understand how genAI "works" and "becomes successful".

Now, can we assume that people with social anxiety are less likely to try out genAI tools, because they feel "too human" and trigger all that anxiety too? That just the thought of having some "sentient"-looking thing responding in human language makes their brains go: "heck nah, I'm outta here"?

It would be fascinating and also scary.

@karolherbst if I might drop a very relevant link into this discussion (sorry if it's a repeat for you and I'm forgetting)

https://softwarecrisis.dev/letters/llmentalist/

The LLMentalist Effect: how chat-based Large Language Models rep… ("The new era of tech seems to be built on superstitious behaviour", Out of the Software Crisis)