Anthropomorphizing AI is dangerous: it causes emotional harm and can derail policy debates. AI developers and journalists need to stop enabling this tendency, and we need research on how people interact with chatbots so we can build better guardrails. We also offer a more nuanced message than "don't anthropomorphize AI" — perhaps the term is so broad and vague that it has lost its usefulness when it comes to generative AI. https://aisnakeoil.substack.com/p/people-keep-anthropomorphizing-ai By @sayashk and me.
People keep anthropomorphizing AI. Here’s why

Companies and journalists both contribute to the confusion

AI Snake Oil
@randomwalker @sayashk This is hard in part because of the natural human tendency to use narratives to make sense of data. Plus, the builders of these systems seek to make them more usable by adding human-like features we can interact with. Early pioneers like Turing wanted to "talk" with computers.