Anthropomorphizing AI is dangerous: it causes emotional harms and it can derail policy debates. AI developers and journalists need to stop enabling this tendency, and we need research on how people interact with chatbots so we can build better guardrails. We also offer a more nuanced message than "don't anthropomorphize AI" — perhaps the term "anthropomorphize" is so broad and vague that it has lost its usefulness when it comes to generative AI. https://aisnakeoil.substack.com/p/people-keep-anthropomorphizing-ai By @sayashk and me.
People keep anthropomorphizing AI. Here’s why

Companies and journalists both contribute to the confusion

AI Snake Oil
@randomwalker @sayashk Every new thing I read from you is a highlight! Thanks for doing this work. Something my team has been working on is trying to center the humans involved in developing, deploying, and using these technologies. It's been difficult, but a really helpful exercise in challenging the habit of obfuscating the people in the process. It's not necessarily always the right approach, but the practice helps shine a light on things anew.