I sometimes wonder how people could cope with the fact that they actually enslaved people, but then I remember that they merely treated them as "tools". "Just a tool", they said when talking about them.

I wonder how big, from a neurological pov, the difference between enslaving a "just a tool" AI "agent" and an actual human "slave" is.

We also build fake relationships with random 2D characters, so it's not like it's entirely absurd, is it?

@karolherbst I mean, one of them is an autocomplete engine whose training data happens to include tool-calling syntax, and the other one is a sentient being with a family that you're forcing to do work for you.

Sure, we can have relationships with objects like dolls and souvenirs, but much like sending tokens to an LLM, that's a one-way street. With a sentient being it feels categorically different, because they are actually able to have thoughts about you too.

@pojntfx oh sure. Not trying to argue that AI agents are sentient or whatever, they aren't.

But like, the question is: does our brain have the ability to make a strict separation there, or would it function in a similar way as if it were a human slave?

Like sure, there is no physical contact, so that's a difference for sure. But what about a manager who just writes text to a bunch of people, giving orders they have to follow because of shitty pay, vs telling an AI agent what to do?

@pojntfx Does that make the brain function differently in any way?

Like, I don't know the answer to that; it would certainly be interesting to figure out.

@karolherbst Oh, thanks for elaborating, hmm, yeah, that is a very good question. Ngl, I've seen quite a few people use language in prompts that is effectively the same as what they give to coworkers when they ask them to complete a ticket ...

I wonder if it might have an effect on what people think about slavery, too? I've now heard from more than one person here in Vancouver that "LLMs show that people are totally OK with having a personal slave", and I was just standing there like "tf".

@karolherbst I hope there will be research on this at some point, because I could find absolutely nothing on the topic myself so far.

@pojntfx yeah.. like, the reason I'm not using the agents isn't some rational "this is bad", but rather that I'm totally weirded out by the thought alone. It feels wrong on a deeply fundamental level. Of course I can rationalize those feelings, but still...

I would feel equally weirded out about having anybody work beneath me with a significant power imbalance rather than eye to eye.