Just read an article about chatgpt "lying" about someone being dead and it's like "I asked it for the link to the obituary but it doubled down on the lie and provided a fake link instead" and I'm begging people to understand that it's a statistical model, it doesn't know it's lying so it can't "double down". The statistically probable answer to "where is the link" is not "I'm sorry I made it all up", it's a link. It doesn't even know what a link is, it just knows vaguely what one looks like, so you're basically asking it "generate a plausible-looking link to an obituary in the Guardian" and it did

The AI isn't malevolent, it's just NOT AI

Dall-e is a statistical model for visual data. It makes visual data that looks like stuff

Chatgpt is a statistical model for text data. It makes text that looks like stuff. That's all. You can't rely on it to produce text that's factually accurate, or code that works, because that's not what it does
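To make that concrete, here's a toy sketch of the same idea at a much smaller scale. This is not how ChatGPT works internally (that's a huge neural network, not a Markov chain), and the training URLs below are made up for illustration; the point is just that a model trained on what links look like will happily emit new link-shaped strings that don't point to anything.

```python
# Toy character-level Markov chain: it learns what Guardian-style obituary
# URLs *look like* from a handful of made-up examples, then samples new
# strings with the same shape. None of them point to real pages.
import random
from collections import defaultdict

# Invented training data, for illustration only.
training_urls = [
    "https://www.theguardian.com/books/2021/jan/04/john-smith-obituary",
    "https://www.theguardian.com/music/2020/mar/18/anne-jones-obituary",
    "https://www.theguardian.com/film/2019/jul/22/peter-brown-obituary",
    "https://www.theguardian.com/stage/2022/nov/09/mary-taylor-obituary",
]

ORDER = 3  # how many previous characters the model conditions on

# Count which character tends to follow each 3-character context.
counts = defaultdict(lambda: defaultdict(int))
for url in training_urls:
    padded = "^" * ORDER + url + "$"  # ^ marks the start, $ marks the end
    for i in range(len(padded) - ORDER):
        counts[padded[i:i + ORDER]][padded[i + ORDER]] += 1

def sample_url(rng):
    """Build one URL-shaped string by repeatedly picking a likely next character."""
    context = "^" * ORDER
    out = []
    while True:
        followers = counts[context]
        nxt = rng.choices(list(followers), weights=list(followers.values()), k=1)[0]
        if nxt == "$":  # the model thinks the string is finished
            return "".join(out)
        out.append(nxt)
        context = context[1:] + nxt

rng = random.Random(0)
for _ in range(3):
    print(sample_url(rng))
```

Everything it prints has the right shape and none of it resolves to a real page, which is the "plausible-looking link to an obituary" behaviour in miniature.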

That doesn't mean it's useless. When you have to come up with words that "look right" for something, like when you're writing an email or ad copy or a self-description blurb for a resume, it can generate something for you that looks like the words you need. And for that it works great

@eniko or, the way I look at it, the first thing this AI learned was how to tell a convincing lie.