The Pentagon is moving toward letting AI weapons autonomously decide to kill humans

https://lemmy.world/post/8715340


The code name for this top secret program?

Skynet.

“Sci-Fi Author: In my book I invented the
Torment Nexus as a cautionary tale

Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus”

Well, Ultron is inevitable.

Who we got for the Avengers Initiative?

As disturbing as this is, it’s inevitable at this point. If one of the superpowers doesn’t develop fully autonomous murder drones, another country will. And eventually those drones will malfunction, or some bug will give them the go-ahead to kill everyone indiscriminately.

If you ask me, it’s just an arms race to see who builds the murder drones first.

I feel like it’s ok to skip to optimizing the autonomous drone-killing drone.

You’ll want those either way.

If entire wars could be fought by proxy with robots instead of humans, would that be better (or less bad) than the way wars are currently fought? I feel like it might be.
You’re headed towards the Star Trek episode “A Taste of Armageddon”. I’d also note that people losing a war without suffering recognizable losses are less likely to surrender to the victor.
A drone that is indiscriminately killing everyone is a failure and a waste. Even the most callous military would try to design better than that for purely pragmatic reasons, if nothing else.
Even the best-laid plans go awry, though. The point is, even if they pragmatically design it not to kill indiscriminately, bugs and glitches happen. The technology isn’t all the way there yet, and putting the ability to kill into a machine that cannot understand context is a terrible idea. It’s not that the military wants to kill everything indiscriminately; it’s that they can’t possibly plan for problems in the code they haven’t encountered yet.
The use and development of other weapons of mass destruction have been successfully avoided through mutual agreements.

Won’t that be fun!

/s

The sad part is that the AI might be more trustworthy than the humans in control.

No. Humans have stopped nuclear catastrophes caused by computer misreadings before. So far, we have a way better decision-making track record.

Autonomous killing is an absolutely terrible, terrible idea.

The incident I’m thinking about is a computer misinterpreting geese as nuclear missiles and a human recognizing the error and turning off the system, but I can only find a couple of sources for that, so I found another:

In 1983, a computer decided that sunlight reflecting off clouds was a nuclear missile strike, and a human waited for corroborating evidence rather than reporting it to his superiors as he was supposed to; reporting it would likely have resulted in a “retaliatory” nuclear strike.

As faulty as humans are, they’re as good a safeguard against tragedy as we have. Keep a human in the chain.

Self-driving cars lose their shit and stop working if a kangaroo gets in their way; one day some poor people are going to be carpet-bombed because of another strange creature no one ever really thinks about except locals.

Have you never met an AI?

Edit: seriously though, no. A big player in the war AI space is Palantir which currently provides facial recognition to Homeland Security and ICE. They are very interested in drone AI. So are the bargain basement competitors.

Drones already have unacceptably high rates of civilian murder. Outsourcing that still further to something with no ethics, no brain, and no accountability is a human rights nightmare. It will make the past few years look benign by comparison.

Yeah, I think the people who are saying this could be a good thing seem to forget that the military always contracts out to the lowest bidder.

Drone strikes minimize casualties compared to the alternatives: heavier ordnance on bigger delivery systems, or boots on the ground.

If drone strikes upset you, your anger is misplaced if you’re blaming drones. You’re really against military strikes at those targets, full stop.

When the targets are things like that wedding in Mali sure.

I think your argument is a bit like saying depleted uranium is better than the alternative, a nuclear bomb, when the bomb was never on the table for half the stuff depleted uranium is used for.

Boots on the ground or heavy ordnance were never a viable option for some of the stuff drones are used for.

Boots on the ground or heavy ordnance were never a viable option for some of the stuff drones are used for.

It was literally the standard policy prior to drones.

Eventually maybe. But not for the initial period where the tech is good enough to be extremely deadly but not smart enough to realize that often being deadly is the stupider choice.
How about no
Yeah, only humans can indiscriminately kill people!
If we don’t, they will. And we can only learn by seeing it fail. To me, the answer is obvious. Stop making killing machines. 🤷‍♂️
Horizon: Zero Dawn, here we come.
Hey, I like that game! Oh, wait... 🤔
It won’t be nearly as interesting or fun as Horizon, I don’t think.
Can we all agree to protest self replication?

LLM "AI" fans thinking "Hey, humans are dumb and AI is smart so let's leave murder to a piece of software hurriedly cobbled together by a human and pushed out before even they thought it was ready!"

I guess while I'm cheering the fiery destruction of humanity I'll be thanking not the wonderful being who pressed the "Yes, I'm sure I want to set off the antimatter bombs that will end all humans" but the people who were like "Let's give the robots a chance! It's not like the thinking they don't do could possibly be worse than that of the humans who put some of their own thoughts into the robots!"

I just woke up, so you're getting snark. makes noises like the snarks from Half-Life You'll eat your snark and you'll like it!

Didn’t Robocop teach us not to do this? I mean, wasn’t that the whole point of the ED-209 robot?
Every warning in pop culture (1984, Starship Troopers, Robocop) has been misinterpreted as a framework upon which to nail the populace.
Every warning in pop culture is being misinterpreted as something other than a fun/scary movie designed to sell tickets, rather than being a scholarly attempt at projecting a plausible outcome.
People didn’t seem to like my movie idea “Terminator, but the AI is actually very reasonable and not murderous”
Every single thing in The Hitchhiker’s Guide to the Galaxy says AI is a stupid and terrible idea. And Elon Musk says it’s what inspired him to create an AI.
Future is gonna suck, so enjoy your life today while the future is still not here.
Thank god today doesn’t suck at all
The future might seem far off, but it starts right now.
At least it will probably be a quick and efficient death of all humanity when a bug hits the system and AI decides to wipe us out.
If you program an AI drone to recognize ambulances and medics and forbid them from blowing them up, then you can be sure that they will never intentionally blow them up. That alone makes them superior to having a Mk. I Human holding the trigger, IMO.
Unless the operator decides hitting exactly those targets fits their strategy and they can blame a software bug.
And then when they go looking for that bug and find the logs showing that the operator overrode the safeties instead, they know exactly who is responsible for blowing up those ambulances.
And if the operator was commanded to do it? And to delete the logs? How naive are you to think this will somehow make war more humane?
Each additional safeguard makes it harder and adds another name to the eventual war crimes trial. Don't let the perfect be the enemy of the good, especially when it comes to reducing the number of ambulances that get blown up in war zones.
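The safeguard being argued about above — a hard rule forbidding protected targets, with operator overrides written to a log that names who took responsibility — can be sketched in a few lines. This is a purely hypothetical illustration; `Controller`, `PROTECTED_CLASSES`, and `authorize` are invented names, and no real weapons system is being described.

```python
# Hypothetical sketch: a hard "never engage protected targets" rule,
# where any human override must carry an identity and is audit-logged.
from dataclasses import dataclass, field
from typing import Optional

PROTECTED_CLASSES = {"ambulance", "medic", "hospital"}

@dataclass
class Controller:
    audit_log: list = field(default_factory=list)

    def authorize(self, target_class: str, operator: Optional[str] = None) -> bool:
        """Return True if engagement is authorized; log every decision."""
        if target_class in PROTECTED_CLASSES:
            if operator is None:
                # Hard rule: the machine never engages protected targets on its own.
                self.audit_log.append(("refused", target_class))
                return False
            # An override is possible, but only with a named operator attached,
            # so the log shows exactly who is responsible for the strike.
            self.audit_log.append(("override", target_class, operator))
            return True
        self.audit_log.append(("engaged", target_class))
        return True

c = Controller()
print(c.authorize("ambulance"))            # refused by the hard rule
print(c.authorize("ambulance", "capt_x"))  # allowed, but logged by name
```

The design point both sides of the thread are circling: the rule itself can be made unbreakable by the machine, but the override path and the integrity of the log are human institutions, which is where the disagreement above actually lies.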
It doesn't work like that, though. Western and Western-backed militaries can do that unpunished, and they do.
A US drone killed a Somali mother and her daughter – but no one was found guilty

The world’s most powerful military force mistook a woman and a child for a man in rural Somalia, killed them, and decided their deaths were no one’s fault.

The Continent

Israeli general: Captain, were you responsible for reprogramming the drones to bomb those ambulances?

Israeli captain: Yes, sir! Sorry, sir!

Israeli general: Captain, you’re just the sort of man we need in this army.

Ah, evil people exist and therefore we should never develop technology that evil people could use. Right.
Seems like a good reason not to develop technology to me. See also: biological weapons.
Those weapons come out of developments in medicine. Technology itself is not good or evil, it can be used for good or for evil. If you decide not to develop technology you're depriving the good of it as well. My point earlier is to show that there are good uses for these things.

Hmm… so maybe we keep developing medicine but not as a weapon and we keep developing AI but not as a weapon.

Or can you explain why one should be restricted from weapons development and not the other?

I disagree with your premise here. Taking a life is a serious step. A machine that unilaterally decides to kill some people with no recourse to human input has no good application.

It's like inventing a new biological weapon.

By not creating it, you are not depriving any decent person of anything that is actually good.

It’s more like we’re giving the machine more opportunities to go off accidentally or potentially encouraging more use of civilian camouflage to try and evade our hunter killer drones.

Right, because self-driving cars have been great at correctly identifying things.

And those LLMs have been following their rules to the letter.

We really need to let go of our projected concepts of AI in the face of what’s actually been arriving. And one of those things we need to let go of is the concept of immutable rule following and accuracy.

In any real world deployment of killer drones, there’s going to be an acceptable false positive rate that’s been signed off on.

We are talking about developing technology, not existing tech.

And actually, machines have become quite adept at image recognition. For some things they're already better at it than we are.

Good to know that Daniel Ek, founder and CEO of Spotify, invests in military AI… www.handelsblatt.com/technik/…/27779646.html?tick…
Handelsblatt

ACAB

All C-Suite are Bastards

I think people are forgetting that drones like these will also be made to protect. And I don’t mean in a police kinda way.

But let’s say Argentina deployed these against Brazil. Brazil would have a defending lineup. They would fight out the war.

Then everyone watching will see this makes no sense to let those robots fight it out. Both countries will produce more robots until, yeah… no more wires and metal, I guess.

Future = less real war, more cold war. Just like the A-bomb works today.

Then everyone watching will see this makes no sense to let those robots fight it out.

Just like how WWI was the War to End All Wars, right?

Future = less real war, more cold war. Just like the A-bomb works today.

Sorry, how is there less war now?