I sometimes wonder how people could cope with the fact that they actually enslaved people, but then I remember that they merely treated them as "tools". "Just a tool," they'd say when talking about them.

I wonder how big, from a neurological pov, the difference is between enslaving a "just a tool" AI "agent" and an actual human "slave".

We also build fake relationships with random 2D characters, so it's not like it's entirely absurd, is it?

Dunno... maybe it's just me, but I find it interesting to wonder what this does to people.

Like, people actually got emotionally shattered when their AI "friend" all of a sudden changed character due to a model update.

And I'm curious how much sincere emotion an AI agent can trigger in humans. Like I don't believe for a second that people "fall" for AI agents because they think they have AGI in front of them, but rather because it feels human enough to the brain.

Which could mean, as a consequence, that human-to-AI-agent behavior is best explained by assuming it's human-to-human behavior.

Even though that's totally not what's going on here.

I'm sure the entire academic landscape on this topic will be _wild_, and that they'll be able to show more nuance on this than I am doing here.

> Like I don't believe for a second that people "fall" for AI agents because they think they have AGI in front of them, but rather because it feels human enough to the brain.

This has a very weird consequence: it's not that people are being tricked, it's their own brain tricking them.

Okay, so. Let's imagine that's a valid approach to understanding how genAI "works" and "becomes successful".

Now, can we assume that people with social anxiety are less likely to try out genAI tools, because it feels "too human" and triggers the same anxiety? And just the thought of having some "sentient"-looking thing responding in human language makes their brains go: "heck nah, I'm outta here"?

It would be fascinating and also scary.

@karolherbst if I might drop a very relevant link into this discussion (sorry if it's a repeat for you and I'm forgetting)

https://softwarecrisis.dev/letters/llmentalist/

@karolherbst I don’t think people “fall in love” with LLMs because they’re so much like a human, but because they act nothing like a human and these people are just that emotionally stunted. They do just want a slave. They want to always be validated and coddled and catered to without having to give anything back, or introspect, or consider they might be wrong, or do anything that it takes to form a genuine connection with a real person.
@danirabbit okay, but here is the thing: isn't what you describe also how some people approach actual human relationships as well?
@karolherbst yes but that doesn’t mean they treat the AI as if it were human, it means they treat humans as if they were objects
@danirabbit yeah so... does that mean that people willingly and openly embracing AI are more likely to treat humans as objects, and people who feel very weirded out by the thought of "owning" an AI agent are not?
@karolherbst I’m not sure if we can fairly make that conclusion across the board but that’s my gut feeling about a lot of people who say they’ve fallen in love with an LLM. I think they have a shallow view of what it means to be loved. I’d like to see actual research around it.

@danirabbit seeing how people fall into the trap of parasocial friendships, especially on social media, I wouldn't at all be surprised if we are heading towards an even bigger issue with genAI long-term...

yeah... we really should do research on this, but sadly that's often only possible after the fact.

Imagine your personal AI assistant telling you which upstream maintainer to harass today instead of social media posts doing it 🙃 and you thinking you're doing good with your justified call-outs.

@karolherbst it’s the same as people who have children because they’re lonely and want a best friend or because they want a legacy or whatever. They want to own another person. They don’t want a genuine connection

@karolherbst I mean, one of them is an autocomplete engine with more training data that includes tool-calling syntax, and the other one is a sentient being with a family that you're forcing to do work for you.

Sure, we can have relationships with objects like dolls and souvenirs, but much like sending tokens to an LLM, that's a one-way street. With a sentient being I feel like it's just categorically different, because they are actually able to have thoughts about you too.

@pojntfx oh sure. Not trying to argue that AI agents are sentient or whatever, they aren't.

But like, the question is: does our brain have the ability to make a strict separation there, or would it function in a similar way as if it were a human slave?

Like sure, there is no physical contact, so that's for sure a difference, but what about a manager who just writes text to a bunch of people to give orders they have to follow because of shitty pay, vs. telling an AI agent what to do?

@pojntfx Does that make the brain function differently in any way?

Like, I don't know the answer to that; it would certainly be interesting to figure out.

@karolherbst Oh, thanks for elaborating, hmm, yeah, that is a very good question. I'm ngl, I've seen quite a few people at least use language in prompts that is effectively the same as what they give to coworkers when they ask them to complete a ticket...

I wonder if it might have an effect on what people think about slavery, too? I've heard from more than one person here in Vancouver now that "LLMs show that people are totally OK with having a personal slave" and I was just standing there like "tf"

@karolherbst I hope that there will be research on this at some point because I could find absolutely nothing about the topic so far myself

@pojntfx yeah... like, the reason I'm not using the agents isn't some rational "this is bad", but rather that I'm totally weirded out by the thought alone. It feels wrong on a deeply fundamental level. Of course I can rationalize those feelings, but still...

I would feel equally weirded out about having anybody work beneath me with a significant power imbalance rather than eye to eye.

@karolherbst @pojntfx the thing that worries me is the inverse

That is, I think that one day within my lifetime, but probably not soon, we will develop something that could reasonably be considered sentient, and we will consider it a tool.
@erincandescent @karolherbst @pojntfx that won't be worse than our collective failure to consider specific classes of other humans as sentient, TBH.

@karolherbst It's neurological in the sense that there has always been this huge gap between technically literate and non-literate people.

People who used to blatantly believe whatever they read on the internet could still be helped in some sane way in the pre-LLM era. But this has become worse with chatbots, because they are built to converse easily and can easily feed into the fantasies of the delusional. I think a lot of us in helpdesk can relate to being confronted by stubborn users just because "ChatGPT said...". This is just the tip of the iceberg.

As painful as it is, my take is that the technically literate ones should help, one person at a time and when they can, as they always did. 404media happened to post on the same topic yesterday; it's a good read (however, you will need to sign up to read this piece).

https://www.404media.co/ai-psychosis-help-gemini-chatgpt-claude-chatbot-delusions/


@karolherbst > I sometimes wonder how people could cope with the fact that they actually enslaved people

*chickens, cows, and pigs have entered the chat*

it's right there for you. Apparently, it's absolutely normal to cope with the abuse of a sentient being by just shrugging it off as the natural order of things and choosing not to think about it any further.