@Mer__edith
A few things stand out to me here:
Firstly:
for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians; in the past, the military did not authorize any "collateral damage"
This is monstrous; there can be no excuse for it. Intentionally allowing collateral damage like this is an outright war crime. AI compounds the issue, as we'll see later, but at its core this is the result of not seeing a population as human, AI targeting or not.
Secondly:
One source stated that human personnel often served only as a "rubber stamp" for the machine's decisions, adding that, normally, they would personally devote only about "20 seconds" to each target before authorizing a bombing.
This right here is my biggest fear with AI making decisions: that the AI will be trusted, and humans will simply rubber-stamp its output. It is an abdication of responsibility. When the decision is rejecting a resume, that is bad; here, it is outright evil.
It is a stark example of the biggest failure mode of AI: humans just trusting the system and punting responsibility, whether that is self-driving cars running into pedestrians, hiring systems rejecting resumes, or, somehow, determining that someone has links to a certain organization and sending the bombs of a government that has fallen into fascism and decided its enemies aren't even human.