Just read an article about chatgpt "lying" about someone being dead and it's like "I asked it for the link to the obituary but it doubled down on the lie and provided a fake link instead" and I'm begging people to understand that it's a statistical model, it doesn't know it's lying so it can't "double down". The statistically probable answer to "where is the link" is not "I'm sorry, I made it all up", it's a link. It doesn't even know what a link is, it just knows vaguely what one looks like, so you're basically asking it to "generate a plausible-looking link to an obituary in the Guardian", and it did

The AI isn't malevolent, it's just NOT AI
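The point above can be sketched with a deliberately crude toy (a hypothetical bigram chain, nothing like ChatGPT's actual architecture): a model that has only learned which URL fragments tend to follow which will happily emit a link-shaped string, with no concept of whether the target page exists. All names and paths below are made up for illustration.

```python
import random

# Toy "next-token" table: each fragment maps to fragments that
# plausibly follow it in training data. Purely illustrative.
bigrams = {
    "https://": ["www.theguardian.com/"],
    "www.theguardian.com/": ["obituaries/"],
    "obituaries/": ["2021/", "2022/"],
    "2021/": ["jane-doe-obituary", "john-smith-obituary"],  # fictional
    "2022/": ["jane-doe-obituary", "john-smith-obituary"],  # fictional
}

def plausible_link(start="https://", rng=random):
    """Chain likely-next fragments until none follow.

    The result *looks* like a real obituary URL, but nothing here
    ever checks that such a page exists -- which is the whole point.
    """
    token, out = start, [start]
    while token in bigrams:
        token = rng.choice(bigrams[token])
        out.append(token)
    return "".join(out)

print(plausible_link())
```

Every run produces a well-formed, Guardian-shaped obituary URL; none of them points anywhere real.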

@eniko @browren

1. You say ChatGPT is not AI. OpenAI, the makers of ChatGPT, say it is.

2. The essay in question deals with non-maleficence, which is an attribute that does not require agency, let alone actual intelligence.

@paezha @browren yeah cause saying it's AI is a lot better marketing than saying it's a very large statistical language model
@eniko @paezha @browren
That's been true since the inception of the term AI by Dartmouth College professor John McCarthy in 1956. It was essentially clickbait of the last century
https://youtu.be/_iMItrc0ChU?t=8m45s
Patrick Boyle discusses early use of the term AI at 8:45 in his video.