Here is another case study in how anthropomorphization feeds AI hype --- and now AI doomerism.

https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test

The headline starts it off with "Goes Rogue". That's a predicate that is used to describe people, not tools. (Also, I'm fairly sure no one actually died, but the headline could be clearer about that, too.)

>>

USAF Official Says He ‘Misspoke’ About AI Drone Killing Human Operator in Simulated Test

The Air Force's Chief of AI Test and Operations initially said an AI drone "killed the operator because that person was keeping it from accomplishing its objective."

The main type of anthropomorphization in this article is the use of predicates that take a "cognizer" argument with the mathy math (aka "AI" system) filling that role.

But the subhead has a slightly more subtle example:

In order to understand the relationship between the two clauses, marked by 'because', as coherent, we need to imagine how the second causes the first.

>>

Given all of the rest of the hype around AI, the most accessible explanation is that the mathy math was 'frustrated' by the actions of the person and so turned on them.

But there's no reason to believe that. The article doesn't specify, but the simulation was probably of a reinforcement learning system -- a system developed through a conditioning setup where there is a specified goal and a search space of possible steps towards it.
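To make that concrete: here's a minimal epsilon-greedy bandit sketch of what "reward-driven" means in such a setup. This is a hypothetical illustration, not the system from the article; the action names and reward values are made up. The point is that the whole "decision" process is arithmetic over a designer-specified reward table -- there is no place for 'frustration' to live.

```python
import random

# Hypothetical reward spec chosen by the designer (NOT from the article).
REWARDS = {"engage_target": 1.0, "hold_fire": 0.0}

def train(steps=1000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: learn per-action value estimates from reward alone."""
    rng = random.Random(seed)
    estimates = {a: 0.0 for a in REWARDS}  # current value estimate per action
    counts = {a: 0 for a in REWARDS}       # times each action was tried
    for _ in range(steps):
        # Mostly exploit the best current estimate; occasionally explore at random.
        if rng.random() < epsilon:
            action = rng.choice(list(REWARDS))
        else:
            action = max(estimates, key=estimates.get)
        reward = REWARDS[action]
        counts[action] += 1
        # Incremental mean update of this action's value estimate.
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

est = train()
# The agent ends up preferring whichever action has the larger number attached
# to it -- nothing more.
```

If the reward spec accidentally makes interference with the operator costly to the objective, a system like this will "avoid" that interference in exactly the same bloodless way it does everything else: by following the larger number.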

>>

@emilymbender This is an awesome time to remember that we don't know what consciousness is.