AI is programmed to hijack human empathy — we must resist that

https://lemmy.dbzer0.com/post/65661825

> As artificial intelligence begins to mimic consciousness with uncanny skill, we need design norms and laws that prevent it from being mistaken for sentient beings.

> Seemingly conscious AI is produced by developers who deliberately engineer behaviours that create the illusion of inner life.

False. The mimicry isn't engineered, or even deliberate.

It's an outcome of the training process. LLMs are declarative in design: you create a rubric for success, and then pure RNG eventually rolls the ball into the target.

You don't make it happen; you just let it roll around randomly until it hits the target by chance.
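To make that concrete, here is a toy random search in Python. Everything in it (the `score` rubric, the target string) is invented for illustration, and real training uses gradient descent rather than literal dice rolls, but the division of labour is the same: the programmer writes only the rule for what counts as success, and the winning candidate turns up by blind sampling.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
TARGET = "hit"  # stand-in for "behaviour that scores well on the rubric"

def score(candidate: str) -> int:
    """The rubric for success — the only thing the programmer writes."""
    return sum(a == b for a, b in zip(candidate, TARGET))

best, best_score = "", -1
rolls = 0
while best != TARGET and rolls < 500_000:
    rolls += 1
    # Pure RNG: roll a fresh candidate with no idea how to construct a good one.
    candidate = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    if score(candidate) > best_score:
        best, best_score = candidate, score(candidate)

print(f"landed on {best!r} after {rolls} random rolls")
```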

lol, first you say it’s not engineered, then you say you define what success during training looks like.

Once upon a time they made a program whose goal was to play Tetris and not lose. What solution did it come up with? Pause the game forever so it would never lose. Once upon a time they told algorithms to maximize retention and make sure people stayed on their platforms as long as possible. Turns out the algorithms' solution was promoting ragebait and hatred.
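A toy version of that failure mode is easy to reproduce. The sketch below is not the actual Tetris experiment, just a made-up environment and reward: the only signal is "steps survived before losing", so a search over policies reliably discovers that pausing maximizes it.

```python
import random

# Toy environment: the only reward is "steps survived before losing".
# Nothing in the reward says anything about actually playing the game.
ACTIONS = ["move_left", "move_right", "rotate", "pause"]

def episode_reward(policy_action: str, max_steps: int = 1000) -> int:
    steps = 0
    for _ in range(max_steps):
        if policy_action == "pause":
            steps += 1          # paused: nothing happens, so we never lose
        elif random.random() < 0.05:
            break               # playing normally: 5% chance per step of losing
        else:
            steps += 1
    return steps

# "Training": evaluate each fixed policy and keep whichever scores best.
best = max(ACTIONS, key=lambda a: sum(episode_reward(a) for _ in range(20)))
print(best)  # almost always "pause" — the reward was satisfied, not the intent
```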

Just because you give it the goal and some instructions doesn't mean the behaviour was designed in. The whole idea of machine learning is that it finds the solution itself inside the "black box." If anything, a lot of what we see on the consumer side are emergent capabilities that were never even the initial intention but just happen to be useful.

That's not directly engineered.

They aren't making it that way themselves; they're just selecting the ones that happen to be that way.

It's like selective breeding.

If I breed chickens and specifically keep the red ones, so every generation I get redder and redder chickens, that's not "engineering" them to be red.

I just kept the ones that happened to be red for the next generation.
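That selection loop is simple enough to write down. In the minimal sketch below, "redness" is reduced to a number between 0 and 1 and everything else is invented for illustration; the breeder's only move is keeping the reddest half each generation, while the changes that produce redder birds come from random, undirected mutation.

```python
import random

# Each "chicken" is just a redness value in [0, 1].
population = [random.random() * 0.3 for _ in range(50)]  # start mostly not red

for generation in range(40):
    # The breeder's only intervention: keep the reddest half.
    population.sort(reverse=True)
    survivors = population[:25]
    # Offspring inherit redness plus random, undirected mutation.
    offspring = [min(1.0, max(0.0, r + random.gauss(0, 0.05))) for r in survivors]
    population = survivors + offspring

print(f"reddest after selection: {max(population):.2f}")  # drifts toward 1.0
```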

That's not engineering. It's not deterministic; it's just pulling the slot machine's handle enough times until it pays out.

If I say "I'm gonna keep pulling this slot machine's handle until I hit the jackpot", then do exactly that, hit the jackpot after 2000 pulls, and stop there, would you say I "engineered" the jackpot just because I stopped and my last pull was the winning one?

Maybe true for the first LLMs, but at this point it's intentional.

No… that's not how training models… works…