Anthropomorphizing AI is dangerous: it causes emotional harms and it can derail policy debates. AI developers and journalists need to stop enabling this tendency, and we need research on how people interact with chatbots so we can build better guardrails. We also arrive at a more nuanced message than "don't anthropomorphize AI" — perhaps the term "anthropomorphize" is so broad and vague that it has lost its usefulness when it comes to generative AI. https://aisnakeoil.substack.com/p/people-keep-anthropomorphizing-ai By @sayashk and me.
"People keep anthropomorphizing AI. Here's why — Companies and journalists both contribute to the confusion" (AI Snake Oil)
@randomwalker @sayashk this is hard in part because of the natural human tendency to use narratives to make sense of data. Plus the builders of these systems seek to make them more usable with human-like features we can interact with. Early pioneers like Turing wanted to 'talk' with computers.

@randomwalker @sayashk same for physical robots, and indeed anything that humans are liable to form unnatural emotional attachments to.

That's intentionally broad, because it's a vulnerability we've been conditioned to ignore, and it causes enormous damage by undermining individual and collective reasoning and autonomy.

@randomwalker @sayashk
Amen! I keep speaking about this, and have to keep disciplining myself not to slide into using human-like terms when speaking about the bot.

I just finished reading the "Stochastic Parrots" paper by @emilymbender et al. yesterday. The paper is from early 2021, but they clearly saw that anthropomorphizing was going to be a big danger (and they explain how the dang things work and where the bias comes from).

Thank you for putting so many links into your article!

@randomwalker @sayashk every new thing I read from you is a highlight! Thanks for doing this work. Something that my team has been working with is trying to center the humans involved in developing, deploying, and using these technologies. It’s been difficult, but a really helpful exercise in challenging the habit of obfuscating the people in the process. It’s not necessarily always the right approach, but the practice helps shine light on things anew.
@randomwalker @sayashk For this exact question of how to talk about AI pls also see "The Language Labyrinth: Constructive Critique on the Terminology Used in the AI Discourse" https://doi.org/10.16997/book55.f (open-access)

@randomwalker @sayashk when someone refers to Siri or Alexa as "she," be sure to say "it's an it."

Hilariously, some people have no problem assigning a gender to this computer program but have big problems using the correct pronouns for many human beings.

@randomwalker @sayashk if you figure out how to stop people doing that, once you've got that all stopped, would you mind also stopping people anthropomorphizing cars?