I sometimes wonder how people could cope with the fact that they actually enslaved people, but then I remember that they merely treated them as "tools" — "just a tool", they'd say when talking about them.

I wonder how big, from a neurological pov, the difference is between enslaving a "just a tool" AI "agent" and an actual human "slave".

We also build fake relationships with random 2D characters, so it's not like it's entirely absurd, is it?

Dunno.. maybe it's just me, but I find it interesting to wonder what it does to people.

Like, people actually got emotionally shattered when their AI "friend" all of a sudden changed character due to a model update.

And I'm curious how many sincere emotions an AI agent could trigger in humans. Like, I don't believe for a second that people "fall" for AI agents because they think they have AGI in front of them, but rather because it feels human enough to the brain.

@karolherbst I don’t think people “fall in love” with LLMs because it’s so much like a human, but because it acts nothing like a human and these people are just that emotionally stunted. They do just want a slave. They want to always be validated and coddled and catered to without having to give anything back or introspect or consider they might be wrong or do anything that it takes to form a genuine connection with a real person
@danirabbit okay, but here is the thing. Isn't what you describe also how some people approach actual human relationships?
@karolherbst yes but that doesn’t mean they treat the AI as if it were human, it means they treat humans as if they were objects
@danirabbit yeah so.. does that mean, that people willingly and openly embracing AI are more likely to treat humans as objects and people who feel very weirded out by the thought of "owning" an AI agent, are not?
@karolherbst I’m not sure if we can fairly make that conclusion across the board but that’s my gut feeling about a lot of people who say they’ve fallen in love with an LLM. I think they have a shallow view of what it means to be loved. I’d like to see actual research around it.

@danirabbit seeing how people fall into the trap of parasocial friendships, especially on social media, I wouldn't be surprised at all if we are steering towards an even bigger issue with genAI long-term..

yeah.. we really should do research on this, but sadly that often is only possible after the fact.

Imagine your personal AI assistant telling you which upstream maintainer to harass today, instead of social media posts doing it 🙃 — and you thinking you're doing good by making "justified" call-outs