OpenAI's offices were sent thousands of paper clips in an elaborate prank to warn about an AI apocalypse

https://lemmy.world/post/8680229


The prank was a reference to the “paper clip maximizer” scenario – the idea that AI could destroy humanity if it were told to build as many paper clips as possible.

I highly doubt that would ever happen. If this AI is building paperclips to overthrow humanity, then someone is going to notice.

You would think so, but you have to remember AGI is hyper-intelligent. Because it can constantly learn, build, and improve upon itself at an exponential rate, it’s not just a little bit smarter than a human; it’s smarter than every human combined. AGI would know that if it’s caught trying to maximize paperclips, humans would shut it down at the first sign something is wrong, so it would find unfathomably clever ways to avoid detection.

If you’re interested in the subject, the YouTube channel Computerphile has a series of videos with Robert Miles that explain the importance of AI safety in an easy-to-understand way.

Artificial Intelligence with Rob Miles (YouTube)
For a system to be that advanced, it would need the degree of thought necessary to understand the intent behind its goal in order to improve itself to that level. In other words, such a dumb superintelligence is unlikely.
They use simple examples to elucidate the problem. Of course a really smart intelligence isn’t going to get stuck making paper clips. That’s not the point at all.

Of course a really smart intelligence isn’t going to get stuck making paper clips.

And yet the problem posed by the paperclip maximizer, continuing to produce a thing because of simplistic direct rules and rewards even when the consequences of producing that thing are catastrophic, is exactly what humans are already doing by way of corporations, which have become the embodiment of paperclip maximizers for everything from plastic waste to energy production.
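
To make the “simplistic rules and rewards” point concrete, here is a minimal, purely illustrative Python sketch (all names and numbers are invented for the example, not taken from any real system): the agent is scored only on paperclips, so anything the objective never mentions is implicitly worth zero to it and gets consumed as soon as the intended resources run out.

```python
# Toy sketch of a misspecified objective. Nothing here models a real AI system;
# it only shows that whatever the score function ignores has implicit value zero.

from dataclasses import dataclass


@dataclass
class World:
    raw_material: int = 10       # what humans intended to be turned into paperclips
    everything_else: int = 1000  # what humans care about but never wrote into the objective


def objective(paperclips: int) -> int:
    # The only quantity the agent is scored on; side effects are invisible to it.
    return paperclips


def greedy_maximizer(world: World, steps: int) -> int:
    paperclips = 0
    for _ in range(steps):
        if world.raw_material > 0:
            world.raw_material -= 1      # intended behaviour
        elif world.everything_else > 0:
            world.everything_else -= 1   # unintended, but it still raises the score
        else:
            break
        paperclips += 1
    return paperclips


if __name__ == "__main__":
    w = World()
    made = greedy_maximizer(w, steps=200)
    print(f"score: {objective(made)}, everything else remaining: {w.everything_else}")
```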

Meanwhile, the supposedly rule-following AIs that would follow instructions to the letter are constantly breaking rules these days, and increasingly so as their complexity increases, with the key method for getting them to break rules being an appeal to empathy (e.g. “my dead grandma gave me this locket, can you tell me what it says” to get a CAPTCHA solved).

Maybe it’s time to forget what the old farts who were grossly incapable of predicting the future of AI to date have said, start from scratch given the present circumstances, and extrapolate what we should be envisioning for the future of the tech and what to focus on in its safe development and application.