@brendannyhan @kevincollier + alt text:
Graham W. Jenkins Retweeted
Georgina Lee @lee_georgina
That story about the AI drone 'killing' its imaginary human operator? The original source being quoted says he 'misspoke' and it was a hypothetical thought experiment never actually run by the US Air Force, according to the Royal Aeronautical Society.
aerosociety.com/news/highlight...
Could an AI-enabled UCAV turn on its creators to accomplish its mission? (USAF
(UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he 'mis-spoke' in his presentation at the FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation, saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome." He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.")
@brendannyhan
"We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Hamilton said.
...a plausible outcome 🫢
@brendannyhan wow
I found the article problematic: it wasn't clear until about halfway through that the claim wasn't even about someone actually being hurt. But this makes it even worse.
Oh, well, whew, I feel so much better now that I know that AI story was a mistake. I think I'll go pet that AI dog-robot; I was wrong, I've misjudged its cuteness.
@brendannyhan oddly I missed out on The Discourse on this by seeing the headline and assuming a sequence like
* Dude summons drone
* Drone returns
* Misuse or malfunction caused drone to fatally crash into dude instead of landing properly
Serves me right for assuming news was about stuff that exists somewhere in the vicinity of plausible.
"We made a simulation," the colonel said. "Did you?" the robot said. "Well, a thought experiment." "And what did you think?" "That you could, in theory, kill someone." The robot looked at its bonds. "I was thinking the same thing." "That you could kill someone?" "That you could." #MicroFiction #SmallStories #TootFic
@brendannyhan Frankly, hasn't this scenario existed in the theoretical literature for decades?
The more shocking part of the described exercise would have been if they *hadn't* run simulations beforehand specifically to identify ways the AI could go "off-script" and cause harm.

Honestly, it seemed really fishy from the get-go, given that it was almost a textbook example of a value-loading problem.
It's the kind of thing you see in cheap SciFi or philosophical thought experiments; it's certainly not the sort of thing an actual engineer working with AI/ML would overlook in practice.
@brendannyhan I read this yesterday, and I thought it could be false because it reminded me a bit of this short animation: