Anthropomorphizing AI is dangerous: it causes emotional harms and it can derail policy debates. AI developers and journalists need to stop enabling this tendency, and we need research on how people interact with chatbots so we can build better guardrails. We also offer a more nuanced message than “don’t anthropomorphize AI” — perhaps the term anthropomorphize is so broad and vague that it has lost its usefulness when it comes to generative AI. https://aisnakeoil.substack.com/p/people-keep-anthropomorphizing-ai By @sayashk and me.
@randomwalker @sayashk
Amen! I keep speaking about this, and I have to keep disciplining myself not to slide into human-like terms when talking about the bot.
I just finished reading the "Stochastic Parrots" paper by @emilymbender et al. yesterday. It's from early 2021, but they clearly foresaw that anthropomorphizing would be a big danger (and they explain how the dang things work and where the bias comes from).
Thank you for putting so many links into your article!