Front Burner is a daily CBC news podcast that generally focuses on stories relevant to Canada. Today they had Heidy Khlaaf, formerly of OpenAI and Trail of Bits and currently at the AI Now Institute, talking about the application of AI to autonomous weapon systems. As you might guess, it's not a pretty story. But it's actually worse than I had imagined. #CanadaPol #AI
https://www.cbc.ca/listen/cbc-podcasts/209-front-burner/episode/16201533-iran-and-ai-on-the-battlefield
Iran and AI on the battlefield | Front Burner | CBC Podcasts | CBC Listen

For decades we have been hearing about the possibility of AI-driven warfare, and now it’s here. Anthropic's AI platform Claude has reportedly been central to the U.S.-Israeli war on Iran. It was used during the attack that killed Iranian Supreme Leader Ayatollah Ali Khamenei, which involved strikes on nearly 900 targets within the first 12 hours, including on a girls’ elementary school that killed at least 165 people – mostly students. Today we’re talking about AI military capabilities: how companies like Anthropic and OpenAI are working with the military, and what happens when these companies and governments start building systems that help decide who lives and who dies in a war. Heidy Khlaaf, the Chief AI Scientist at the AI Now Institute and an expert on AI safety within defense and national security, joins the show.

Whenever we read a story about somebody being arrested and charged with a crime on no more evidence than a hit from a facial recognition algorithm - and it seems there are more of these every day - we wonder how law enforcement could be so... sloppy. But it's a symptom of a well-known tendency called "automation bias": people give more credence to information excreted by an algorithm than to the same information from a human source. Further, automation adds distance from the decision itself.
When Canadian AI pioneer Geoff Hinton stepped back from his role at Google, he said it was to be free to speak openly about the risks of deploying the technology inappropriately, pointing to autonomous weapons systems as the most fraught example. It's happening before our eyes, directed by the same geniuses who 18 months ago were talking heads on cable news and who have done nothing since to convince us that we misjudged them.
Meanwhile, we have in the USA an administration that seems to have difficulty distinguishing between enemies and citizenry - it employs the same language to refer to actual terrorist organizations and to anybody who doesn't agree with every detail of its agenda. Some of these weapons are pointing at us, and with algorithms on the triggers.
To be clear, I, for one, welcome our new algorithmic overlords.
Even if the killer robots don't incinerate the world, what world will it be where even more power and resources are concentrated in the hands of the Thiels, Musks, and Trumps? We will all be living under tarps on sidewalks before this is over.