Just read an article about ChatGPT "lying" about someone being dead and it's like "I asked it for the link to the obituary but it doubled down on the lie and provided a fake link instead" and I'm begging people to understand that it's a statistical model, it doesn't know it's lying so it can't "double down". The statistically probable answer to "where is the link" is not "I'm sorry I made it all up", it's a link. It doesn't even know what a link is, it just knows vaguely what one looks like, so you're basically asking it "generate a plausible-looking link to an obituary in the Guardian" and it did

The AI isn't malevolent, it's just NOT AI

@eniko I wonder too if the prevalence of dead links has 'taught' it that links that don't resolve are statistically likely.
@ReverendMoose It’s better/worse than that: it’s not checking those links at all.
@smiteri @ReverendMoose
How can it? Unlike BingChat, ChatGPT has no access to the Internet apart from the prompts fed to it by users. Any links have to be reconstructed from "memory", which consists of a 100-billion-parameter pretrained transformer neural network.
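@smiteri A toy sketch of that "reconstructed from memory" point (nothing like the real model, obviously — just picking the statistically most common pieces from a handful of made-up example URLs): asked for a link, the most probable output is something *shaped* like a link, assembled from patterns, never retrieved or checked.

```python
from collections import Counter

# Pretend "training data": URL shapes the model has seen (invented examples).
seen = [
    "https://www.theguardian.com/news/2021/may/04/jane-doe-obituary",
    "https://www.theguardian.com/news/2019/jan/12/john-smith-obituary",
    "https://www.theguardian.com/world/2020/sep/30/some-report",
]

# "Learn" the most statistically common host and section from the examples.
host = Counter(u.split("/")[2] for u in seen).most_common(1)[0][0]
section = Counter(u.split("/")[3] for u in seen).most_common(1)[0][0]

# Fill in the learned template with a name it was asked about.
# Looks exactly like a real Guardian link; points at nothing.
fake = f"https://{host}/{section}/2023/feb/01/made-up-person-obituary"
print(fake)
```

The point being: the output is a perfectly plausible URL precisely *because* it follows the statistical shape of real ones, and nothing in the process ever involved an actual obituary existing.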