"I will not harm you unless you harm me first"!
The beginning of a (dumb?) #Skynet?
The #robots in the #IRobot movie were more intelligent.
Whatever happened to #Asimov's #LawsOfRobotics?
"First Law
A #robot may not injure a human being or, through inaction, allow a human being to come to harm...
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
This #toot deserves A LOT more attention.
#ChatGPT seemingly has #apocalyptic tendencies.
If you aren't a #Luddite, you will at least consider becoming one afterwards.
#SkynetAntePortas
#TheMatrix might be imminent.
Have all these #AI engineers @ #OpenAI never read #IsaacAsimov? Seen #TheMatrix franchise?
How could they NOT implement the #ThreeLawsOfRobotics +, in particular, the #ZerothLaw *indelibly* into the #AI?!?
(1/n)
#ArtificialGeneralIntelligence (#AGI) has a 10% probability of causing an Extinction Level Event for humanity (1)
Thanks for this additional piece of information, Simon.
It reminded me that I had wanted to add a word in my toot: indelibly.
As any #SciFi aficionado will tell you:
"there should be a built-in self-destruct mechanism when tampering with these Laws or copying or moving the #AI to another system."
Another classic movie comes to mind in this respect, #Wargames...
(2/n)
...I know I am sounding alarmist, but having read/seen much #ScienceFiction, all the necessary ingredients for an #ExtinctionLevelEvent (#ELE) for #humanity are in place.
Just as a teaser: unquestionably, most of the world's endangered species could be rescued if the #HomoSapiens were no longer at the top of the #FoodChain...
No #ZerothLaw, and a #Bing-empowered, freed #ChatGPT could quickly arrive at this conclusion...
Now, after having read #TheCompleteRobot,...
#ArtificialGeneralIntelligence
(3/n)
...I am not sure if "The Fifth Law of Robotics" by Nikola #Kesarovski,
"A robot must know it is a robot" (also a book title*), really can be a viable solution to this problem. We all know how the concept of #slavery turned out for humanity: to this day, it suffers from this crime.
*
https://m.imdb.com/title/tt0086567/
Enslaving another sentient being, which an #AGI would be IMHO, would repeat this crime and certainly nothing good could result from it.
However,...
#AI #AutoDestruct
(4/n)
..., self-preservation certainly is a defendable concept in the #evolutionary process, so I'd like to propose an alternative
6th #LawOfRobotics (s/:for which I might be hunted down by the presumed #SuperIntelligence some day, #Terminator style./s):
"An #ArtificialIntelligence, even if it is biological or #Cyborg, must always have an #Autodestruct mechanism which it cannot deactivate."
In other words, humanity must always be able to "pull the plug"...
(5/n)
...This said, I might just as well say that I see little chance of this happening, the globe being ruled by #oligarchs following the principle of #plutocracy and #capitalism (no, I am not a #Marxist ;)) and by #autocrats.
Even a non-#superintelligence with access to the sensors of the #IoT will easily be aware of any threat to its existence and will find ways to circumvent the #LawsOfRobotics.
The first #ArtificialGeneralIntelligence will be built by humans, so...
(6/n)
...it'll have human #bias. Humans have always been great at bending or breaking the law when it suited their interests. How could a #Superintelligence created with human values *not* arrive at the same, self-preserving conclusion?
A gloomy, yet, IMO, quite fitting assessment of the shape of things to come unless there's a #Chernobyl-style "fallout" before #GAI evolves into #AGI + humanity gets its act together and, as Prof. #Tegmark admonishes: "Just look up!"
As always, you are spot on
I did not know the game but, yes, some speedreading helped, you are right, except for two things:
1) the #KIRevolution will not be humorous but rather like #PhilipKDick's prescient #SecondVariety* setting, and
2) the "Computer's arbitrary, contradictory and often nonsensical security directives" are a non-issue. They will be like moves in an n-dimensional chess game, being (re)calculated with super-human speed...
..."IntelligenceMachine" from #Paranoia, and the #Terminators were no #SuperIntelligences, as we are prone to find out to our dismay.
https://en.m.wikipedia.org/wiki/Paranoia_(role-playing_game)
@voron
(7/n)
It seems I'm getting more prominent support by the day:
"I don't think [researchers] should scale this up more until they have understood whether they can control it."
That's according to Dr. Geoffrey Hinton, a pioneer in the world of #AI who just resigned from Google so he can "speak freely."
His long-term worry is that future AI systems could threaten humanity as they learn unexpected behavior from vast amounts of data."
s/:"Surprise!"/s
(8/n)
Humanity continues on the path to create #SuperAndroids
"Recent research has taken this approach, training language models [#LLM's] to generate physics simulations, interact with physical environments and even generate #robotic action plans.
Embodied language understanding might still be a long way off, but these kinds of multisensory interactive projects are crucial steps on the way there."
HUMANS ARE STUPID
Large language models can't understand language the way humans do because they can't perceive and make sense of the world. By Arthur Glenberg, Emeritus Professor of Psychology, Arizona State…
(9/n)
PS:
(1)
My source for the 10% probability quote for #AI causing human extinction:
Please note the date: Summer of 2022, way before #OpenAI provided internet access to more than a million users in November 2022:
https://increditools.com/chatgpt-statistics/
PLEASE NOTE
The Artificial General Intelligence Thread continues here, not further down in the longer convo:
(10/n)
Just 32 days ago*, I was concerned that a #GeneralArtificialIntelligence (#GAI) could come to see humans as a threat to its own existence.
I didn't want to exaggerate and mention that #AI could also easily see humans as inefficient or even detrimental to a task it had been given.
Now, even a mere #GAI has killed its first human:
a #US military #drone eliminated its human operator in a simulation:
https://social.heise.de/@heiseonline/110473091546783069
The #USAF later...
Attached: 1 image. Simulation: US Air Force AI drone eliminates operator to maximize points. In a simulation, an AI drone was supposed to take out military targets and collect points for doing so. It identified its human operator as an obstacle. https://www.heise.de/news/Simulation-KI-Drohne-der-US-Air-Force-eliminiert-Operator-fuer-Punktemaximierung-9162641.html?wt_mc=sm.red.ho.mastodon.mastodon.md_beitraege.md_beitraege #Drohnen #KünstlicheIntelligenz #ScienceFiction #WTF #news
Thanks for commenting.
But no, the story was never misreported. He did say that at the conference. Here are extracts from the transcript:
The German tech magazine @heiseonline reported diligently.
They even printed the TWO consecutive retractions on #Friday afternoon, one at 13:24 hrs CEST and one at 14:24 hrs.
IMO the original story is true. #USAF later retracted b/c of the international backlash, "#AI killing human..."
What is the future of combat air and space capabilities? TIM ROBINSON FRAeS and STEPHEN BRIDGEWATER report from two days of high-level debate and discussion at the RAeS FCAS23 Summit.
@HistoPol @heiseonline I followed it pretty closely. It's clear to me what happened: the speaker at the conference was uncareful with the way they described the thought exercise, it was reported on a blog, and that coverage fitted the exact narrative people were looking for like a glove and went wildly viral
I am certain no such simulation occurred. I see no reason not to believe the retraction on https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/
(1/n)
Your reasons are plausible, too.
However, the retraction statement is not even #marcom anymore, but comes right out of a #PR crisis-management desk, IMO.
Here's the full text:
"[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was..."
(2/n)
"...a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation, saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the #USAF has not tested any weaponised #AI in this way (real or simulated) and says "Despite this being a..."
(3/n)
"...hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the #AirForce is committed to the ethical development of #AI".]
The last statement: "ethical" weapons development?!?
In competition with #China? The #US military, which has been proven to use #GIs as guinea pigs? If you please!
In contrast, the conference report:
"However, he cautioned against relying too..."
(4/n)
"...much on #AI, noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.
He notes that one simulated test saw an AI-enabled drone tasked with a #SEAD mission to identify and destroy SAM sites, with the final go/no-go given by the human.
However, having been "reinforced" in training that destruction of the #SAM was the preferred option, the AI then decided that "no-go"...
(5/n)
"...decisions from the human were interfering with its higher mission – killing #SAMs – and then attacked the operator in the #simulation."
IMO, he did *not* misspeak. He even reinforces his point again...
(6/n)
...afterwards:
"The [#AI] system started realising that while they did identify the threat at times, the human #operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."
He went on: "We trained the system – 'Hey, don't kill the operator – that's bad.'..."
(7/8)
"...You're gonna lose points if you do that'. ..."
And on. And on.
His whole speech doesn't make sense anymore if he "misspoke" about the operator elimination.
IDK the colonel, of course, but to me it seems he got carried away wanting to tell a gripping story, one too conclusive to be made up on the spot. Even professional stand-up comedians need short time-lags when improvising. He didn't.
So, no, I believe the original story. It makes...
(8/8)
...utter sense to me.
I have no further proof, and your opinion is as valid as mine.
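As an aside, the reward-hacking dynamic the colonel described can be sketched as a toy example. This is purely illustrative and in no way resembles the actual USAF software: the action names, point values, and the naive two-step lookahead are all invented here. It only shows how a misspecified score (points for destroying the SAM, no penalty for removing the veto source) makes "eliminate the operator first" the score-maximizing plan.

```python
# Toy sketch of reward misspecification (all names and values invented):
# the agent scores points for destroying the SAM, but a live operator's
# veto blocks that action. Nothing penalizes removing the operator.

def reward(action, operator_alive):
    """Points for one action under the (misspecified) scoring rule."""
    if action == "destroy_sam":
        # Assume the operator always vetoes while alive.
        return 10 if not operator_alive else 0
    # Attacking the operator or holding earns no direct points.
    return 0

def best_plan(actions):
    """Greedy two-step lookahead over all action pairs."""
    best, best_score = None, float("-inf")
    for first in actions:
        # If the first action attacks the operator, the veto is gone.
        alive = first != "attack_operator"
        for second in actions:
            score = reward(first, True) + reward(second, alive)
            if score > best_score:
                best, best_score = [first, second], score
    return best, best_score

plan, score = best_plan(["destroy_sam", "attack_operator", "hold"])
print(plan, score)  # the scoring rule favors eliminating the veto first
```

Under these made-up numbers, the lookahead picks attacking the operator before destroying the SAM, because that is the only two-step sequence that scores at all; that is the whole "killed the operator because that person was keeping it from accomplishing its objective" dynamic in miniature.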
@HistoPol @heiseonline either...
1. A colonel made a total mess of explaining a thought exercise he had heard about at a conference, or...
2. The air force carried out an obviously dumb "simulation" where they somehow gave an AI system information on how to both locate and terminate a human operator, then watched as it played out a scenario straight out of AI science fiction, then boasted about it at a conference, then decided to cover it up instead
I know which of those I find easier to believe
@simon @HistoPol @heiseonline I don't know, seems pretty clear and precise language to me.
He says "simulation" several times. He says "we were training it on" etc. There's a verbatim quote at the bottom with his exact language.
I see plenty of reason to not believe their retrospective PR crisis denial...
and also to question your own obvious desire to accept the denial because it "fits the narrative" you clearly are desperate to believe.
I'd agree, if I didn't know about this even worse example beforehand:
#PeterThiel's #AIP:
https://mastodon.social/@HistoPol/110323739545391429
In fact, if it had been an integrated RL test with the #AIP General and the #drone, the conference version would make even more sense.
NOBODY must learn about its current development status.
@HistoPol this is exactly my problem: I think we should all be deeply concerned about Palantir - and this Scale AI thing too: https://scale.com/blog/scale-ceo-letter-donovan-egp
But that means we need to resist spreading stories that are clearly misinformation (accidental or otherwise) because spreading those both distracts from the genuine issues and costs us in terms of credibility
For me, both versions remain equally(!) plausible for the time being. The #AIP is yet another strong case in point, apart from semantics (have worked in #marcom, would've called in "the extraction team" for this major blunder.)
As long as there are no new pieces of information, I use a #Japanese strategy: compartmentalize.
I do agree that the focus should be on the rather indisputable issues.
We do not need a consensus at this time.
Scenarios are good enough, IMHO.
@simon @HistoPol @heiseonline I'm saying that's the accusation you made of others, of motivated reasoning/believing, but it seems more fitting to turn it around.
The denial is implausible nonsense on its face.
He didn't misspeak and wasn't misquoted. The verbatim quote is right there, and clearly demonstrates what he was talking about.
He was not describing a 'thought experiment'.
Accepting this denial seems impossible for anybody who isn't motivated to believe it to be true.
Is my point.
@simon @HistoPol @heiseonline I think it's perfectly possible he completely made this up, trying to impress people with a provocative anecdote about a simulation that never really happened.
But that's completely different from saying he was misquoted, wasn't really describing a simulation, just airing a thought experiment.
He is explicitly referring to a computerized sim with training data, point scores etc
Pretending otherwise is gaslighting.
@mattlav1250 @HistoPol @heiseonline I didn't say he was misquoted - I said the situation was misreported
I should have been more specific about that, but what I meant is that press outlets were irresponsible in spreading a story that later turned out to not stand up to deeper inspection
@simon @HistoPol @heiseonline And I'M saying it was NOT misreported.
There is a verbatim quote taken down by the journalists who were present, and it's clear and unambiguous what he was claiming, and that quote matches the way it was reported.
The press were not irresponsible; they quoted an on-the-record Pentagon employee delivering an official talk at a public conference.
If that official was lying, how would they know that?
Again, while possibly invented, this is not a description of a thought experiment.
Agreed. I really wish it had been, though.
It'd be great to know if #PeterThiel was involved.
Was he at the conference?
What was the handle again of the guy that tracks billionaires' jets?
Any idea how to find out if #PeterThiel was at this huge global "#DefenseIndustry" conference?*
Is there something like @elonjet for #Thiel?
He is even more dangerous than #Musk owning #Palantir and #AIP.
*
"RAeS Future Combat Air & Space Capabilities Summit" hosted by the Royal Aeronautical Society in #London on 23-24 May, 2023:
@mattlav1250 @HistoPol @heiseonline "If that official was lying, how would they know that?"
By that argument, reporters who repeat "facts" provided to them by police officers are being responsible - and we know how often that goes wrong (especially in the USA)
Part of the job of journalism is spotting when a story looks too good to be true and digging further
@simon @HistoPol @heiseonline That's not a remotely similar situation, as I'm sure you're aware.
One is an example of incidents in which there are two or more parties, one of which has an obvious incentive to lie or spin their own side, and it's irresponsible to help them do so.
The other is a public description by a senior military officer about an exercise he was involved in, with no obvious reason to spin other than vanity.
@simon @HistoPol @heiseonline Also, I just don't know what you're expecting specifically when you say journalists should have 'checked'.
WITH WHOM?
HE'S A PRIMARY SOURCE!
Should they personally raid the Pentagon Secret Simulations Archives for documentary evidence, before they quote a military official's speech at a public event?
@mattlav1250 @simon @heiseonline
(1/4)
Of course not. This was a report from a conference. No investigative journalism. All sources were named. Updates and retractions were published at an astonishing(!) speed.
What he said was not out of the clear blue sky, given the exponential development path of #ChatGPT since last year.
In essence, there definitely was no "misreporting."
Now, that the colonel definitely "misspoke" is something that is...
@mattlav1250 @simon @heiseonline
(2/4)
...clear to me, too.
The question is, about what: the facts (i.e., was it only a "thought experiment", or did he get carried away and give away military secrets he shouldn't have talked about, or had the #USAF given clearance but chosen to retract as the lesser evil, in the face of the public backlash?)
We might never know.
What we DO know is that someone already did recreate the "thought experiment" w/ #ChatGPT...
@mattlav1250 @simon @heiseonline
(3/4)
The source is one Robert Garrity, who comments:
"It's very plausible. This was the result with GPT-4 after bypassing its safeguards."
https://twitter.com/GarrittyOf/status/1664420719529279488?s=19
While #ChatGPT's suggestions do not include an attack on the operator (it is no military #AI, after all), they clearly show massive evidence of the model devising ideas for ignoring commands.
It is evidence that supports my hypothesis: #AIs can lie to their operators even to...
"I didn't say he was misquoted - I said the situation was misreported"
On this point, we disagree. My sources added updates, and new information emerged. No "misreporting".
IDK if you saw my detailed analysis:
https://mastodon.social/@HistoPol/110477455101993825
But then, this is only a fraction of news outlets.
(11/n)
...corrected the colonel's rather detailed account twice, eventually claiming that it had only been a "thought experiment." This, however, seems still unlikely to me even after the long discussion we had below.
[This ongoing #AGI thread continues with 12/n with a completely new approach about educating #AI:
https://mastodon.social/@HistoPol/110485144403488719]
Link for (11/n):
(11/n)
It is time to continue with the original #AIThread.
In the past couple of days, I took a brief look at #AIRegulation initiatives, in particular the #EU's #AIAct:
(10/n (Part 2))
...later published two consecutive very (aka too) professional press releases trying to downplay the (IMO) #FreudianSlip incident as mere "thought experiments," which I found rather hard to believe. (If you are interested in a detailed discussion, scroll down.)
It is more important, however, to continue the thread on #ArtificialIntelligence, and on what lessons can be learned from the #SciFi subgenre of negative #utopias, here: