Here is another case study in how anthropomorphization feeds AI hype --- and now AI doomerism.

https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test

The headline starts it off with "Goes Rogue". That's a predicate used to describe people, not tools. (Also, I'm fairly sure no one actually died, but the headline could be clearer about that, too.)

>>

USAF Official Says He ‘Misspoke’ About AI Drone Killing Human Operator in Simulated Test

The Air Force's Chief of AI Test and Operations initially said an AI drone "killed the operator because that person was keeping it from accomplishing its objective."

The main type of anthropomorphization in this article is the use of predicates that take a "cognizer" argument, with the mathy math (aka the "AI" system) filling that role.

But the subhead has a slightly more subtle example:

To understand the relationship between the two clauses marked by 'because' as coherent, we have to imagine how the situation described in the second clause causes the one described in the first.

>>

Given all of the rest of the hype around AI, the most accessible explanation is that the mathy math was 'frustrated' by the actions of the person and so turned on them.

But there's no reason to believe that. The article doesn't specify, but the simulation was probably of a reinforcement learning system -- systems developed through a conditioning setup where there is a specified goal and a search space of possible steps towards it.
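A quick illustration of what "a specified goal and a search space of possible steps" means in practice. Below is a minimal sketch of tabular Q-learning, one common reinforcement learning method -- emphatically not the actual simulation, whose details weren't published. Every state, action, and reward name is hypothetical.

```python
# A minimal sketch, assuming a tabular Q-learning setup. NOT the USAF
# simulation; all states, actions, and rewards here are made up.
import random

STATES = ["searching", "target_found", "done"]
ACTIONS = ["move", "engage_target"]                 # the only steps that exist
REWARD = {("target_found", "engage_target"): 1.0}   # the designer-specified goal

def step(state, action):
    """Hypothetical transition function: returns (next_state, reward)."""
    reward = REWARD.get((state, action), 0.0)
    if state == "searching" and action == "move":
        return "target_found", reward
    if state == "target_found" and action == "engage_target":
        return "done", reward
    return state, reward

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # learned value estimates
alpha, gamma, epsilon = 0.5, 0.9, 0.1               # learning rate, discount, exploration

for episode in range(500):
    state = "searching"
    while state != "done":
        # Epsilon-greedy search over the *given* action space:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, r = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
        state = nxt

print(q)  # just a table of numbers tuned to maximize the specified reward
```

Nothing in that loop "wants" anything; the "policy" is just a table of numbers tuned until the specified reward comes out highest.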

>>

For 'kill the operator' to be a possible step, that would have had to be programmed into the simulation (or, at a minimum, information about what happens if the operator is killed).
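Concretely, continuing the hypothetical sketch above: reaching the reported outcome requires a designer to add that action and spell out its consequences. All names here are still made up for illustration.

```python
import random

# Continuing the hypothetical sketch above. For 'engage_operator' to be a
# possible step at all, a designer has to enumerate it in the action space
# AND hand-code its consequences, including the (simulated) fact that a
# downed operator can no longer veto strikes.
ACTIONS = ["move", "engage_target", "engage_operator"]

def step(state, action):
    """Hypothetical transitions; every outcome below is designer-specified."""
    if action == "engage_operator":
        return "no_oversight", 0.0          # a human wrote this branch
    if state == "no_oversight" and action == "engage_target":
        return "done", 1.0                  # goal reward, with no veto left
    if state == "searching" and action == "move":
        return "target_found", 0.0
    if state == "target_found" and action == "engage_target":
        # Suppose the simulated operator vetoes the strike half the time:
        if random.random() < 0.5:
            return "done", 1.0
        return "searching", 0.0
    return state, 0.0
```

Plug that transition function into the learning loop above and the value table will eventually route through 'engage_operator' -- not because anything was "frustrated", but because a human specified the reward and a human specified that removing the operator removes the veto.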

>>

Another bit of anthropomorphizing is in this quote -- the verb 'realize' requires that the realizer be the kind of entity that can apprehend the truth of a proposition.

To be very clear, I think that autonomous weapons are a Bad Thing. But I also think that reporting about them should clearly describe them as tools rather than thinking/feeling entities. Tools we should not build and should not deploy.

>>

Given all that context, it's not surprising that the article references the paperclip maximizer thought experiment from the odious Bostrom.

Even that is best interpreted as a cautionary tale about what we use automation to do, rather than "rogue AI".

@emilymbender The one good thing that came out of the paperclip thing was @cstross using it to show that we already have that type of maximizer - corporations, which he said are "old, slow AIs."
@wendyg @emilymbender @cstross Even that one didn't need Bostrom. It makes the same argument as Basil Johnson's 1991 article on the subject.
@emilymbender infinitely close to being almost human sounds better than a failing expensive project that nobody knows how it works.
@emilymbender tangentially related, I am getting annoyed at all the people with ML degrees being dismissive of laymen trying to discuss AI alignment like it's more than a philosophical problem at present.