Just read an article about ChatGPT "lying" about someone being dead, and it's like "I asked it for the link to the obituary but it doubled down on the lie and provided a fake link instead," and I'm begging people to understand that it's a statistical model: it doesn't know it's lying, so it can't "double down". The statistically probable answer to "where is the link?" is not "I'm sorry, I made it all up", it's a link. It doesn't even know what a link is; it just knows vaguely what one looks like. So you're basically asking it to "generate a plausible-looking link to an obituary in the Guardian", and it did
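The "plausible-looking link" point can be sketched with a toy example. This is not how a language model actually works internally; it just illustrates shape-without-grounding: every fragment below is picked because it *looks* right, and nothing ever checks that the page exists. The name and slug are invented for illustration.

```python
import random

# Toy illustration: assemble a "plausible-looking" Guardian obituary URL
# from fragments that have the right shape, the way a next-token predictor
# fills in whatever looks statistically likely. No step verifies reality.

def plausible_obituary_link(name: str) -> str:
    slug = name.lower().replace(" ", "-")          # names-in-urls look like this
    year = random.choice(["2021", "2022", "2023"]) # dates-in-urls look like this
    month = random.choice(["jan", "feb", "mar"])
    day = str(random.randint(1, 28))
    # Every piece is shaped right; the whole is still fiction.
    return f"https://www.theguardian.com/news/{year}/{month}/{day}/{slug}-obituary"

link = plausible_obituary_link("Jane Doe")
print(link)
```

The output is always a syntactically convincing URL, and never a real one, which is exactly the failure mode the article mistook for "lying".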

The AI isn't malevolent, it's just NOT AI

@eniko

You are technically correct, which is the worst kind of correct. You aren't going to be able to educate the general public, even moderately intelligent individuals. The perception is that it IS an AI, and people will base decisions on it.

It doesn't matter what definitions it does or does not fit, because letting this horribly broken thing into the wild will lead to an increase in harm (to both individuals and society).

@atatassault It is an AI; a language model is an AI. It is artificial, and it is intelligent to some degree. I like to compare this level of AI to a somewhat trained animal, specifically an insect or very small animal with a simple nervous system. People just expect human-level intelligence since it can talk, but it can talk because that is all it can do.
From an AI I expect intelligence, but not necessarily reason, awareness, or thoughts.