"We must stop giving AI human traits. My first interaction with GPT-3 rather seriously annoyed me. It pretended to be a person. It said it had feelings, ambitions, even consciousness.

That’s no longer the default behaviour, thankfully. But the style of interaction — the eerily natural flow of conversation — remains intact. And that, too, is convincing. Too convincing.

We need to de-anthropomorphise AI. Now. Strip it of its human mask. This should be easy. Companies could remove all reference to emotion, judgement or cognitive processing on the part of the AI. In particular, it should respond factually without ever saying “I”, or “I feel that”… or “I am curious”.

Will it happen? I doubt it. It reminds me of another warning we’ve ignored for over 20 years: “We need to cut CO₂ emissions.” Look where that got us. But we must warn big tech companies of the dangers associated with the humanisation of AIs. They are unlikely to play ball, but they should, especially if they are serious about developing more ethical AIs.

For now, this is what I do (because I too often get this eerie feeling that I am talking to a synthetic human when using ChatGPT or Claude): I instruct my AI not to address me by name. I ask it to call itself AI, to speak in the third person, and to avoid emotional or cognitive terms.

If I am using voice chat, I ask the AI to use a flat prosody and speak a bit like a robot. It is actually quite fun and keeps us both in our comfort zone."

https://theconversation.com/we-need-to-stop-pretending-ai-is-intelligent-heres-how-254090?utm_source=firefox-newtab-en-gb
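If you reach these models through an API rather than the web app, the same house rules can be pinned once as a system prompt instead of being restated in every chat. A minimal sketch with the OpenAI Python SDK; the model name and the exact wording of the instructions are my own illustrative choices, not taken from the article:

```python
# Sketch: pinning de-anthropomorphising instructions in a system prompt.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "Do not address the user by name. "
    "Refer to yourself only as 'the AI' and speak in the third person. "
    "Avoid emotional or cognitive language such as 'I feel', 'I think' or 'I am curious'. "
    "Respond factually and plainly."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What causes tides?"},
    ],
)

print(response.choices[0].message.content)
```

The same idea carries over to any provider that accepts a system-style instruction.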

#AI #GenerativeAI #LLMs #Chatbots #Intelligence #Anthropomorphization

We need to stop pretending AI is intelligent – here’s how

AI may appear human, but it is an illusion we must tackle.

The Conversation

@clacke

Re. Not anthropomorphizing LLMs

I'm a sucker for this. Thank you for writing about it. I'll apologise to an inanimate object if I walk into it.

Some practical tips I find useful for following this:
1. Use the verb "I prompted" rather than "I told" or "I asked".
2. Say that the program "output" something rather than that it "replied".
3. I avoid the term "confabulation" because it too is an anthropomorphization (the reality is that the computer program is doing exactly what the user instructed it to do), but if I were compelled to anthropomorphize, I would say "confabulation" rather than "hallucination".

I would be curious to know if you or any other readers have any more tips!

The following cartoon is from:
https://www.smbc-comics.com/comic/precise

#LLM #AI #GAN #programming #language #linguistics #metacognition #philosophy #computers #anthropomorphization #maths #mathematics #math

Saturday Morning Breakfast Cereal - Precise

@philosophy Talk ahead at our upcoming workshop "Mensch Metapher Maschine. Das Selbst im Spiegel der Technik | Die Technik im Spiegel des Selbst" ("Human Metaphor Machine. The Self in the Mirror of Technology | Technology in the Mirror of the Self") in Luxembourg on Monday and Tuesday.

I will analyze public and scientific practices of anthropomorphizing the non-human and of human self-#technomorphization.

#Philosophie #philosophy #AI #KI #Anthropomorphization

Here’s how people are actually using AI

An analysis of a million ChatGPT interaction logs found that the overwhelmingly most popular use case for the chatbot was creative composition, with sexual role-playing in second place. People also liked to use it for brainstorming and planning, and for asking for explanations and general information.

#ArtificialIntelligence #AI #LLM #GenAI #chatbot #sex #misinformation #anthropomorphization #technology #tech

https://www.technologyreview.com/2024/08/12/1096202/how-people-actually-using-ai/

Here’s how people are actually using AI

Something peculiar and slightly unexpected has happened: people have started forming relationships with AI systems.

MIT Technology Review

OpenAI itself is warning that GPT-4o’s capabilities seem to be causing some users to become increasingly attached to the chatbot, with potentially worrying consequences.

The risk it highlights is “anthropomorphization and emotional reliance,” which “involves attributing human-like behaviors and characteristics to nonhuman entities, such as AI models.”

#OpenAI #ChatGPT #chatbot #GenAI #LLM #ArtificialIntelligence #AI #anthropomorphization #psychology #technology #tech

https://www.techradar.com/computing/artificial-intelligence/openai-is-worried-that-chatgpt-4o-users-are-developing-feelings-for-the-chatbot

OpenAI is worried that ChatGPT-4o users are developing feelings for the chatbot

What happens when you forget an AI isn’t a real person?

TechRadar

In this house we anthropomorphize inanimate objects. Our Peugeot 107 is named Adora Belle Dearheart, after the Terry Pratchett character. Our washing machine is named James, like Jesse and James from Team Rocket, because it sounds like he's going to take off, and of the two gay Millennial icons, he seemed the more likely to take a load.

Now I have a bike. She weighs 17 kg. She's a big girl. I felt that she was a Pam. It seemed like a good name for an apple-bottomed Southern girl in a sundress with a Pollyanna mentality who would be down for a good ride in the country. Say hi to Pam. I ride her 5 km to work every day.

#bike #anthropomorphization #life

It’s No Wonder People Are Getting Emotionally Attached to Chatbots

Research in human-computer and human-robot interaction shows that we love to anthropomorphize—attribute humanlike qualities, behaviors, and emotions to—the nonhuman agents we interact with.

#anthropomorphization #chatbots #ArtificialIntelligence #AI #GenAI #LLM #psychology #technology #tech

https://www.wired.com/story/its-no-wonder-people-are-getting-emotionally-attached-to-chatbots/

It’s No Wonder People Are Getting Emotionally Attached to Chatbots

AI chatbots can be friendly and responsive—even sexy. It’s time to take these fundamentally human behaviors more seriously.

WIRED