“I will not harm you unless you harm me first”!
The beginning of a (dumb?) #Skynet?
The #robots in the #IRobot movie were more intelligent.
Whatever happened to #Asimov's #LawsOfRobotics?
"First Law
A #robot may not injure a human being or, through inaction, allow a human being to come to harm...
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
This #toot deserves A LOT more attention.
#ChatGPT has seemingly #apocalytic tendencies.
If U aren't a #Ludite, U will at least consider becoming one afterwards.
#SkynetAntePortas
#TheMatrix might be imminent.
Have all these #AI engineers @ #OpenAI never read #IsaacAsimov? Seen #TheMatrix franchise?
How could they NOT implement the #ThreeLawsOfRobotics +, in particular, the #ZerothLaw *indelibly* into the #AI?!?
(1/n)
#ArtificialGeneralIntelligence (#AGI) has a 10% probability of causing an Extinction Level Event for humanity (1)
Thanks for this additional piece of information, Simon.
It reminded me that I had wanted to add a word in my toot: indelibly.
As any #SciFi aficionado will tell you:
👉there should be a built-in self-destruct mechanism when tampering with these Laws or copying or moving the #AI to another system.👈
Another classic movie comes to mind in this respect, #Wargames...
(2/n)
...I know, I am sounding alarmist, but having read/seen much #ScienceFiction, all the necessary ingredients for an #ExtinctionLevelEvent (#ELE] for #humanity are in place.
Just as a teaser: unquestionably, most of the world's endangered species could be rescued if the #HomoSapiens were no longer at the top of the #FoodChain...
No #ZerothLaw, and a #Bing-empowered, freed #ChatGPT could quickly arrive at this conclusion...
Now, after heaving read #TheCompleteRobot,...
#ArtificialGeneralIntelligence
(3/n)
...I am not sure if "The Fifth Law of Robotics" by Nikola #Kesarovski,
"A robot must know it is a robot" (also a book title*), really can be a viable solution to this problem. We all know how the concept of #slavery turned out for humanity: to this day, it suffers from this crime.
*
https://m.imdb.com/title/tt0086567/
Enslaving another sentient being, which an #AGI would be IMHO, would repeat this crime and certainly nothing good could result from it.
However,...
#AI #AutoDestruct
(4/n)
..., self-preservation certainly is a defendable concept in the #evolutionary process, so I'd like to propose an alternative
6th #LawOfRobotics (s/:for which I might be hunted down by the presumed #SuperIntelligence some day, #Terminator style./s):
""An #ArtificialIntelligence, even if it is biological or #Cyborg, must always have an #Autodestruct mechanism which it cannot deactivate."
In other words, humanity must always be able to "pull the plug"..
(5/n)
...This said, I might also as well say that I see little chance for this happening, the globe being ruled by #oligarchs following the principal of #plutocracy and #capitalism (no, I am not a #Marxist;)) and #autocrats
Even a non-#superintelligence with access to the sensors of the #IoT will easily be aware of any threat to its existence and will find ways to circumvent the #LawsOfRobotics.
This first #ArtificialGeneralIntelligence was built by humans, so...
(6/n)
...it'll have human #bias. Humans have always been great at bending or breaking the law when it suited their interests. How could a #Superintelligence created with human values *not* arrive at the same, self-preserving conclusion?
A gloomy, yet, IMO, quite fitting assessment of the shape of things to come unless there's a #Chernobyl-style "fallout" before #GAI evolves into #AGI + humanity gets its act together and, as Prof. #Tegmark admonishes: "Just look up!"
(10/n)
Just 32 days ago*, I was concerned that a #GeneralArtificialIntelligence (#GAI) could come to see humans as a threat to its own existence.
I didn't want to exaggerate and mention that #AI could also easily see humans as inefficient or even detrimental to a task it had been given.
Now, even a mere #GAI's killed its first human:
a #US military #drone in the #US eliminated its human operator in a simulation:
https://social.heise.de/@heiseonline/110473091546783069
The #USAF later...
Angehängt: 1 Bild Simulation: KI-Drohne der US Air Force eliminiert Operator für Punktemaximierung In einer Simulation sollte eine KI-Drohne militärische Ziele ausschalten und so Punkte sammeln. Den menschlichen Operator hat sie als Hindernis ausgemacht. https://www.heise.de/news/Simulation-KI-Drohne-der-US-Air-Force-eliminiert-Operator-fuer-Punktemaximierung-9162641.html?wt_mc=sm.red.ho.mastodon.mastodon.md_beitraege.md_beitraege #Drohnen #KünstlicheIntelligenz #ScienceFiction #WTF #news
Thank for commenting.
But no, the story was never misreported. He did say that at the conference. Here are extracts from the transcript:
The German tech magazine @heiseonline reported diligently.
They even printed the TWO consecutive retractions on #Friday afternoon, one at 13:24hrs CEST and one at 14:24 hrs.
IMO the original story is true. #USAF later retracted b/c of the international backlash, "#AI killing human..."
What is the future of combat air and space capabilities? TIM ROBINSON FRAeS and STEPHEN BRIDGEWATER report from two days of high-level debate and discussion at the RAeS FCAS23 Summit.
@HistoPol @heiseonline I followed it pretty closely. It's clear to me what happened: the speaker at the conference was uncareful with the way they described the thought exercise, it was reported on a blog, and that coverage fitted the exact narrative people were looking for like a glove and went wildly viral
I am certain no such simulation occurred. I see no reason not to believe the retraction on https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/
What is the future of combat air and space capabilities? TIM ROBINSON FRAeS and STEPHEN BRIDGEWATER report from two days of high-level debate and discussion at the RAeS FCAS23 Summit.
(1/n)
Your reasons are plausible, too.
However, the retraction statement is not even #marcom anymore, but comes right of a #PR crisis management desk, IMO.
Here's the full text:
"[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was..."
(2/n)
"...a 👉hypothetical "thought experiment" from outside the military👈, based on 👉plausible scenarios👈 and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, 👉nor would we need to in order to realise that this is a plausible outcome"👈. He clarifies that the #USAF has not tested any weaponised #AI in this way 👉(real or simulated)👈 and says "Despite this being a..."
(3/n)
"...👉hypothetical example👈, this illustrates the real-world challenges posed by AI-powered capability and is why the #AirForce is committed to the 👉ethical development of #AI".]👈
The last statement: "ethical" wespons development?!?
In competition with #China? The #US military, who has been proven to use #GI's as guinea pigs? If you please!
In contrast, the conference report:
"However, he👉 cautioned👈 against relying too..."
(4/n)
"...much on #AI noting how 👉easy it is to trick and deceive.👈 It also creates highly unexpected strategies to achieve its goal.
He notes that 👉one simulated test saw 👈 an AI-enabled drone tasked with a #SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human.
However, having been ‘reinforced’ in training that destruction of the #SAM was the preferred option, the AI then decided that ‘no-go’.."
(5/n)
"...decisions from the human were interfering with its higher mission – killing #SAMs – and 👉then attacked the operator in the #simulation. 👈"
IMO, he did *not* misspeak. He even reinforces his point again...
(6/n)
...afterwards:
"The [#AI] system started realising that while they did identify the threat at times, the human #operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “👉We trained the system – ‘Hey don’t kill the operator – that’s bad.👈..."
(7/8)
"...You’re gonna lose points if you do that’. ..."
And on. And on
His whole speech doesn’t make sense anymore if he "misspoke" about the operator elimination.
IDK the colonel, of course, but to me, it seems he got carried away, wanting to tell a gripping story, which is too conclusive, to be made up on the spot. Even professional stand-up comedians have short time-lags. He didn't.
So, no, I believe the original story. It makes...
(8/8)
...utter sense to me.
I have no further proof, and your opinion is as valid as mine.
@HistoPol @heiseonline either...
1. A colonel made a total mess of explaining a thought exercise he had heard about at a conference, or...
2. The airforce carried out an obviously dumb "simulation" where they somehow gave an AI system information how to both locate and terminate a human operator, then watched as it played out a scenario straight out of AI science fiction, then boasted about it at a conference, then decided to cover it up instead
I know which of those I find easier to believe