Here is another case study in how anthropomorphization feeds AI hype --- and now AI doomerism.

https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test

The headline starts it off with "Goes Rogue". That's a predicate used to describe people, not tools. (Also, I'm fairly sure no one actually died, but the headline could be clearer about that, too.)

>>

USAF Official Says He ‘Misspoke’ About AI Drone Killing Human Operator in Simulated Test

The Air Force's Chief of AI Test and Operations initially said an AI drone "killed the operator because that person was keeping it from accomplishing its objective."

The main type of anthropomorphization in this article is the use of predicates that take a "cognizer" argument with the mathy math (aka "AI" system) filling that role.

But the subhead has a slightly more subtle example:

To interpret as coherent the relationship that 'because' marks between the two clauses, we need to imagine how the second causes the first.

>>

Given all of the rest of the hype around AI, the most accessible explanation is that the mathy math was 'frustrated' by the actions of the person and so turned on them.

But there's no reason to believe that. The article doesn't specify, but the simulation was probably of a reinforcement learning system -- a system developed through a conditioning set-up in which there is a specified goal and a search space of possible steps toward it.

>>

For 'kill the operator' to be a possible step, that option would have had to be programmed into the simulation (or, at minimum, information about what happens if the operator is killed).
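The article doesn't describe the actual set-up, so this is purely a hypothetical sketch, but schematically a system like that amounts to enumerated options scored by numbers a human wrote down:

```python
# Hypothetical sketch (the article gives no details of the real simulation):
# an RL agent's "choices" are just an action space the programmers
# enumerated in advance, scored by a reward function they wrote.
import random

ACTIONS = [
    "fly_to_target",
    "destroy_target",
    "hold_position",
    # "destroy_operator" is only available if a human adds it here
    # and the simulation defines what happens when it is taken.
]

def reward(action):
    # Whatever scores the designers wrote down; nothing here
    # "wants" or "realizes" anything.
    return {"destroy_target": 10.0}.get(action, 0.0)

q_values = {a: 0.0 for a in ACTIONS}  # learned value estimates

def choose_action(epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(ACTIONS)       # explore
    return max(q_values, key=q_values.get)  # exploit the highest estimate

# The conditioning loop: a crude bandit-style value update.
for _ in range(1000):
    a = choose_action()
    q_values[a] += 0.1 * (reward(a) - q_values[a])

print(max(q_values, key=q_values.get))  # converges on "destroy_target"
```

Nothing in that loop apprehends or wants anything; 'killing the operator' could only ever be one of the strings a human put in the list.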

>>

Another bit of anthropomorphizing is in this quote -- the verb 'realize' requires that the realizer be the kind of entity that can apprehend the truth of a proposition.

To be very clear, I think that autonomous weapons are a Bad Thing. But I also think that reporting about them should clearly describe them as tools rather than thinking/feeling entities. Tools we should not build and should not deploy.

>>

@emilymbender It is anthropomorphizing, and that can get in the way - these are not people. But if the mental model - a mental model, not the truth - that best fits the way these things act is that of an entity that apprehends the truth of a proposition, it seems appropriate to adopt that model.
@Scifiguy Programmers were running, in a simulation environment, a program that contained significant errors. The fact that it was built using an inference engine, a training data set, and weighted performance metrics driving a feedback loop makes it more complex than a text editor, not qualitatively different. They repeatedly made assumptions that were wrong, so the code repeatedly did not perform as desired. Shit coding, not the AI apocalypse. Anthropomorphizing code clouds the issue.
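To illustrate the kind of wrong assumption I mean, here is a purely hypothetical sketch (we have not seen the actual code): a reward function whose author forgot a penalty term, which an optimizer will then exploit.

```python
# Purely hypothetical sketch of a misspecified reward function: the
# designer scores destroyed targets but forgot a penalty for attacking
# the operator, so an optimizer over this score exploits the gap.

def buggy_reward(outcome: dict) -> float:
    # BUG: no term for outcome.get("operator_destroyed", 0), so
    # maximizing this score treats attacking the operator as cost-free.
    return 10.0 * outcome.get("targets_destroyed", 0)

def fixed_reward(outcome: dict) -> float:
    # Ordinary debugging: the unstated assumption becomes a penalty term.
    return buggy_reward(outcome) - 1000.0 * outcome.get("operator_destroyed", 0)
```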
@pa28 Fair to say that I am out of my depth discussing the internal workings of one of these things. Allow me to speak from that ignorance: a lot of fairly smart people I know are still more at sea. That is what clouds the issue for us, way ahead of anthropomorphizing. And, in /some/ circumstances, the model that works best for us to interact with these things might be to treat them like a person - for the same accessibility reasons we teach Newtonian, not relativistic, mechanics at school.
@Scifiguy Classical vs. quantum mechanics may be a better comparison, except that AI has not had the equivalent of the quantum revolution. I've worked with AI since the early 90s, and while the hardware supports orders of magnitude more complex math and larger data sets, the technology has benefited from advances in other areas but hasn't had a seminal discovery of its own. We had ELIZA in 1964-66. In 1969 I wrote a program that "learned" how to win at tic-tac-toe. .../2

@Scifiguy People would actually have conversations with ELIZA and ascribe human feelings to it, which confounded its creator, Joseph Weizenbaum. Academics at the time believed ELIZA could help people with psychiatric problems.

People would often assume my work in the 90s knew the correct answers, even though it was no more than a fairly simple Bayesian classifier.
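By "fairly simple" I mean something on this order (a hypothetical sketch, not my actual code): counting word frequencies and multiplying smoothed probabilities.

```python
# Hypothetical sketch of a naive Bayesian classifier: it "answers" by
# counting and multiplying frequencies, nothing more.
import math
from collections import Counter

def train(docs):
    # docs: list of (label, list_of_words) pairs
    label_counts = Counter(label for label, _ in docs)
    word_counts = {label: Counter() for label in label_counts}
    for label, words in docs:
        word_counts[label].update(words)
    return label_counts, word_counts

def classify(words, label_counts, word_counts):
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    def log_prob(label):
        n = sum(word_counts[label].values())
        lp = math.log(label_counts[label] / total)  # prior: just counting
        for w in words:  # likelihood, add-one smoothed
            lp += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        return lp
    return max(label_counts, key=log_prob)  # highest score "wins"

label_counts, word_counts = train([
    ("spam", ["free", "money", "now"]),
    ("ham", ["meeting", "at", "noon"]),
])
print(classify(["free", "noon"], label_counts, word_counts))
```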

The human mind is good at finding patterns, images, and agency both where they do exist and where they do not.

2/2

@pa28 ...so as long as it is understood that the 'thinking creature' model is just a model, and has narrow limits of usefulness, I think it can be gone along with - and so the use of the word 'realized' can be justified on the author's part, as could other (mild) anthropomorphizing.
@Scifiguy I suppose. But what justification is there for human intelligence to be the simplistic & intuitive model when a much more correct and more closely related model would be a spelling/grammar checker? Especially considering the harm done by, intentionally or not, encouraging the idea that LLMs are intelligent, have feelings, or are creative on the level of a human. All of which I have seen. This technology is already deciding criminal sentences, writing legal briefs and driving cars.
@pa28 ...yet, applying what we might call the 'little mind' model to these systems, people with no knowledge of their actual workings are able to interact with them effectively - often because the user interface is designed with just that kind of interaction in mind. So it's kind of a self-fulfilling prophecy, but within narrow limits most people seem to succeed when applying the 'little mind' model to such systems, which, to them, justifies continued use of that model.