The Pentagon is moving toward letting AI weapons autonomously decide to kill humans
The code name for this top secret program?
Skynet.
“Sci-Fi Author: In my book I invented the
Torment Nexus as a cautionary tale
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus”
Well, Ultron is inevitable.
Who we got for the Avengers Initiative?
As disturbing as this is, it’s inevitable at this point. If one of the superpowers doesn’t develop their own fully autonomous murder drones, another country will. And eventually those drones will malfunction or some sort of bug will be present that will give it the go ahead to indiscriminately kill everyone.
If you ask me, it’s just an arms race to see who builds the murder drones first.
I feel like it’s ok to skip to optimizing the autonomous drone-killing drone.
You’ll want those either way.
Won’t that be fun!
/s
No. Humans have stopped nuclear catastrophes caused by computer misreadings before. So far, we have a way better decision-making track record.
Autonomous killing is an absolutely terrible, terrible idea.
The incident I’m thinking about is geese being misinterpreted by a computer as nuclear missiles and a human recognizing the error and turning off the system, but I can only find a couple sources for that, so I found another:
In 1983, a Soviet early-warning computer mistook sunlight reflecting off clouds for a nuclear missile strike. The officer on duty waited for corroborating evidence instead of reporting it to his superiors as protocol required; reporting it would likely have triggered a “retaliatory” nuclear strike.
As faulty as humans are, that’s as good a safeguard against tragedy as we have. Keep a human in the chain.
Have you never met an AI?
Edit: seriously though, no. A big player in the war AI space is Palantir which currently provides facial recognition to Homeland Security and ICE. They are very interested in drone AI. So are the bargain basement competitors.
Drones already have unacceptably high rates of civilian murder. Outsourcing that still further to something with no ethics, no brain, and no accountability is a human rights nightmare. It will make the past few years look benign by comparison.
Drone strikes minimize casualties compared to the alternatives - heavier ordnance on bigger delivery systems or boots on the ground
If drone strikes upset you, your anger is misplaced if you’re blaming drones. You’re really against military strikes at those targets, full stop.
When the targets are things like that wedding in Mali sure.
I think your argument is a bit like saying depleted uranium is better than the alternative, a nuclear bomb - when the bomb was never on the table for half the things depleted uranium gets used for.
Boots on the ground or heavy ordnance were never a viable option for some of the stuff drones are used for.
Boots on the ground or heavy ordnance were never a viable option for some of the stuff drones are used for.
It was literally the standard policy prior to drones.
LLM "AI" fans thinking "Hey, humans are dumb and AI is smart so let's leave murder to a piece of software hurriedly cobbled together by a human and pushed out before even they thought it was ready!"
I guess while I'm cheering the fiery destruction of humanity I'll be thanking not the wonderful being who pressed the "Yes, I'm sure I want to set off the antimatter bombs that will end all humans" but the people who were like "Let's give the robots a chance! It's not like the thinking they don't do could possibly be worse than that of the humans who put some of their own thoughts into the robots!"
I just woke up, so you're getting snark. *makes noises like the snarks from Half-Life* You'll eat your snark and you'll like it!
Israeli general: Captain, were you responsible for reprogramming the drones to bomb those ambulances?
Israeli captain: Yes, sir! Sorry, sir!
Israeli general: Captain, you’re just the sort of man we need in this army.
Hmm… so maybe we keep developing medicine but not as a weapon and we keep developing AI but not as a weapon.
Or can you explain why one should be restricted from weapons development and not the other?
I disagree with your premise here. Taking a life is a serious step. A machine that unilaterally decides to kill some people with no recourse to human input has no good application.
It's like inventing a new biological weapon.
By not creating it, you are not depriving any decent person of anything that is actually good.
Right, because self-driving cars have been great at correctly identifying things.
And those LLMs have been following their rules to the letter.
We really need to let go of our projected concepts of AI in the face of what’s actually been arriving. And one of those things we need to let go of is the concept of immutable rule following and accuracy.
In any real world deployment of killer drones, there’s going to be an acceptable false positive rate that’s been signed off on.
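To make that concrete, here’s a rough back-of-the-envelope sketch (all the numbers are hypothetical, purely for illustration) of why even a “small” signed-off false positive rate compounds badly at scale - the classifier sees vastly more non-targets than targets, so false positives can dwarf real hits:

```python
# Hypothetical base-rate sketch: a classifier that scans far more
# non-targets than targets produces many false positives even when
# its per-scan error rate sounds acceptable on paper.
def expected_false_positives(scans, target_rate, false_positive_rate):
    """Expected number of non-targets wrongly flagged as targets."""
    non_targets = scans * (1 - target_rate)
    return non_targets * false_positive_rate

# Assumed numbers, purely illustrative:
# 100,000 scans, 1 in 10,000 are real targets, 1% false positive rate.
fp = expected_false_positives(100_000, 0.0001, 0.01)
real_targets = 100_000 * 0.0001
print(round(fp), real_targets)  # ~1000 wrongly flagged vs. 10 real targets
```

Under those assumed numbers the system flags roughly a hundred times more innocent people than actual targets - which is what an “acceptable false positive rate” quietly means once someone signs off on it.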
We are talking about developing technology, not existing tech.
And actually, machines have become quite adept at image recognition. For some things they're already better at it than we are.
ACAB
All C-Suite are Bastards
I think people are forgetting that drones like these will also be made to protect. And I don’t mean in a police kinda way.
But if, let’s say, Argentina deployed these against Brazil, Brazil would have a defending lineup. They would fight out the war.
Then everyone watching will see it makes no sense to let those robots fight it out. Both countries will produce more robots until, yeah… no more wires and metal, I guess.
Future = less real war, more cold war. Just like the A-bomb works today.
Then everyone watching will see it makes no sense to let those robots fight it out.
Just like how WWI was the War to End All Wars, right?
Future = less real war, more cold war. Just like the A-bomb works today.
Sorry, how is there less war now?