And warnings like this should be reason enough to ensure anything with AI has a physical off switch. Unfortunately, we are instead putting AI into combat drones with lethal weapons.
#AI #warnings
https://www.marketwatch.com/story/will-ai-start-going-rogue-the-chorus-of-warnings-is-getting-louder-c4d4b831
@kdkorte AI was the reason the girls' school was targeted in Iran. It probably wasn't the only one. Besides, these regions have been testing grounds for new weapons. Who cares about humans?

@AAKL The sad part is that, for the girls' school, there were humans in the loop. Just, no one cared enough to check whether the AI was correct or had all the data.

I thought the movie "Eye in the Sky" showed how bad it can get on our side. Yet somehow, even the coldest people in the movie still weighed multiple bad outcomes against each other, instead of being simply careless.

@kdkorte Since it was the U.S. that did this, there are a few options:

1 - The US screwed up
2 - Palantir and OpenAI are incompetent
3 - The US was deliberately fed false data (it wouldn't be the first time)

I say this because destroying schools and civilian support is standard practice in the endless wars waged on Middle Eastern countries. It's born of the idea that you should send anyone who isn't you back into the dark ages because you have a right to exist and they don't. And, of course, this particular US regime is more than happy to be led by the nose.

@AAKL We know it's 1) + 2):
1) The system ran on data that was horribly out of date.
2) The AI that was supposed to flag it didn't.
1)+2) No one tested whether the AI would actually flag out-of-date data in every case.
@kdkorte That sounds about right. I'm not sure it's much of an accomplishment to add to the compounding failures of AI models that help you automate (yay) while hollowing out the ground beneath you.