My latest newsletter: The Chatbot Isn't Your New Best Friend (And Never Will Be) https://www.readtpa.com/p/your-chatbot-isnt-your-new-best-friend?sd=pf
Your Chatbot Isn't Your New Best Friend (And Never Will Be)

Yes, I let a chatbot write the headline to this piece about why it's irresponsible for news organizations to publish stories pretending that chatbots are sentient.

The Present Age
@parkermolloy It’s fascinating that, even after it admits it has no feelings, it continues to output that it was a “pleasure” to help you, and similar language. Likewise, even after being challenged, it keeps referring to itself as “I”. My guess is that a lot of the apparent sentience these chatbots have comes from these kinds of very simple rhetorical tricks.

@parkermolloy
The fact that a few thoughtfully chosen prompts cause the chatbot to respond with these criticisms shows that these criticisms were all in its training data sets, and occurred frequently. All the problems these chatbots will cause were known in advance, and avoidable.

#AIHype

@parkermolloy
Very interesting - thank you. It's revealing not just of what the chatbots are doing (and a very "honest" bot you were interacting with in that exchange), but of how the English language uses emotional terms for what are actually non-emotional interactions. We do it human-to-human too, of course, which is why it is part of the language model the bots are using.

I couldn't help but be reminded of conversations in Star Trek, between human characters and Data, and between humans and Vulcans.

@parkermolloy "stories pretending that chatbots are not sentient."

Is that "not" supposed to be there, or am I misreading the negative? It seemed mismatched with the rest, but I am also very brain-tired.

@parkermolloy It would be interesting to see if you could prompt it not to use anthropomorphic language about itself. (e.g., "Respond without conveying that you are happy or have any other emotional or sentient response that you do not truly have.")
@parkermolloy my chatbot sent me this 😅