@parkermolloy
the fact that a few thoughtfully chosen prompts cause the chatbot to respond with these criticisms shows that these criticisms were all in its training data sets, and occurred frequently. All the problems these chatbots will cause were known in advance, and avoidable.
@parkermolloy
Very interesting - thank you. It's revealing not just of what the chatbots are doing (and a very "honest" bot you were interacting with in that exchange), but of how the English language uses emotional terms for what are actually non-emotional interactions. We do it human-to-human too, of course, which is why it's part of the language model the bots are using.
I couldn't help but be reminded of talks in Star Trek, between human characters and Data, and between humans and Vulcans.
@parkermolloy "stories pretending that chatbots are not sentient."
Is that "not" supposed to be "now," or am I misreading the negative? It seemed mismatched with the rest, but I'm also very brain tired.
That was a great article!