I sometimes wonder how people could cope with the fact that they actually enslaved people, but then I remember that they merely treated them as "tools". "Just a tool," they would say when talking about them.

I wonder how big, from a neurological point of view, the difference is between enslaving a "just a tool" AI "agent" and an actual human "slave".

We also build fake relationships with random 2D characters, so it's not like it's entirely absurd, is it?

@karolherbst I mean, one of them is an autocomplete engine whose training data happens to include tool-calling syntax, and the other one is a sentient being with a family that you're forcing to do work for you.

Sure, we can have relationships with objects like dolls and souvenirs, but much like sending tokens to an LLM that's a one-way street. With a sentient being I feel like it's just categorically different because they are actually able to have thoughts about you too.

@pojntfx oh sure. Not trying to argue that AI agents are sentient or whatever, they aren't.

But the question is: does our brain have the ability to make a strict separation there, or would it function in a similar way as if it were a human slave?

Like sure, there is no physical contact, so that's a difference for sure. But what about a manager who just writes text to a bunch of people, giving orders they have to follow because of shitty pay, vs. telling an AI agent what to do?

@karolherbst @pojntfx the thing that worries me is the inverse

That is, I think that one day within my lifetime, though probably not soon, we will develop something that could reasonably be considered sentient, and we will consider it a tool.
@erincandescent @karolherbst @pojntfx that won't be worse than our collective failure to consider specific classes of other humans as sentient, TBH