Just read an article about ChatGPT "lying" about someone being dead, and it's like "I asked it for the link to the obituary but it doubled down on the lie and provided a fake link instead," and I'm begging people to understand that it's a statistical model: it doesn't know it's lying, so it can't "double down". The statistically probable answer to "where is the link" is not "I'm sorry, I made it all up", it's a link. It doesn't even know what a link is; it just knows vaguely what one looks like. So you're basically asking it "generate a plausible-looking link to an obituary in the Guardian", and it did.
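If "it just knows what one looks like" sounds hand-wavy, here's a toy sketch of the idea in Python: a tiny character-level Markov chain trained on a few made-up Guardian-style obituary URLs. This is nothing like ChatGPT's actual internals (real models are neural nets predicting tokens), but the core move is the same: emit the statistically likely next thing, with no concept of whether the page exists.

```python
# Toy character-level Markov model: it learns what obituary links *look like*,
# then generates a new one. Truth never enters the process; it only knows
# which character tends to follow which. (Training URLs below are made up.)
import random
from collections import defaultdict

training_urls = [
    "https://www.theguardian.com/news/2023/jan/12/jane-doe-obituary",
    "https://www.theguardian.com/news/2023/mar/04/john-smith-obituary",
    "https://www.theguardian.com/news/2022/nov/28/mary-jones-obituary",
]

ORDER = 4  # look at the last 4 characters to pick the next one

# Count which character follows each 4-character context in the training data.
follows = defaultdict(list)
for url in training_urls:
    padded = " " * ORDER + url + "\n"
    for i in range(len(padded) - ORDER):
        follows[padded[i:i + ORDER]].append(padded[i + ORDER])

# Generate: repeatedly sample a likely next character. No lookup, no fact
# check, no concept of "this page exists" -- just pattern continuation.
# The length cap guards against the chain looping on a repeated context.
random.seed(1)
out = " " * ORDER
while not out.endswith("\n") and len(out) < 200:
    context = out[-ORDER:]
    out += random.choice(follows[context])

print(out.strip())  # a plausible-looking obituary URL that may point nowhere
```

Run it and you'll get a Guardian-shaped URL spliced together from pieces of the training examples: it looks right and probably points nowhere. At this toy scale, that's all "doubling down" is.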

The AI isn't malevolent, it's just NOT AI

@eniko People anthropomorphise their Roombas. There's no hope for most to even begin to understand that #ChatGPT isn't thinking and doesn't have wants or an attitude. An analogy: in the 1970s there were members of my family who thought the TV presenter was talking to them specifically. ChatGPT hits a combination of plausible, expected, and overconfident that will, at best, leave another AI winter in its wake once its limitations become common knowledge. At worst, it will destroy trust in communications.