I asked #chatgpt to write me an essay, with citations, about a scientific topic I know very well (bat crawling behaviour). It wrote a bad essay that didn’t really say much, but cited two papers with which I wasn’t familiar, in very good journals. I was freaked out! How did I not know those papers?
Turns out, chatgpt just made them up. The papers don’t exist. That’s a handy thing to know.
@riskindan Screenshot, please. Was the ability to lie, and the predilection to do so, deliberately programmed into #chatgpt? #gpt3 #chatgpt3 #AI #chatbot
@tolortslubor @riskindan Large language model AI doesn't come with any concept of truth: the only objects it handles are words and their connections. The problem isn't that it's built to intentionally lie, but that preventing this sort of machine from producing untrue statements is immensely difficult: it produces a probabilistic response based on the prompt, without any check against external truth (and if you think "well, that sounds dangerous"... yes, it very much is.)
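A toy sketch of the point above: a purely statistical generator picks each next word by probability alone, with no notion of whether the result is true. The tiny "model" below is invented for illustration (real LLMs use neural networks over billions of parameters), but the failure mode is the same: it reliably produces citation-shaped text that refers to nothing.

```python
import random

# Toy bigram "language model": each word maps to possible next words
# with weights learned from co-occurrence. It knows word statistics,
# nothing about truth.
bigrams = {
    "<start>": [("Smith", 3), ("Jones", 2)],
    "Smith": [("et", 1)],
    "Jones": [("et", 1)],
    "et": [("al.", 1)],
    "al.": [("(2019),", 2), ("(2021),", 3)],
    "(2019),": [("Nature.", 1)],
    "(2021),": [("Science.", 1)],
}

def generate(seed=None):
    """Sample a sequence word by word until no continuation exists."""
    rng = random.Random(seed)
    word, out = "<start>", []
    while word in bigrams:
        choices, weights = zip(*bigrams[word])
        word = rng.choices(choices, weights=weights)[0]
        out.append(word)
    return " ".join(out)

# Every call yields a plausible-looking citation; none refer to real papers.
print(generate())
```

Sampling always terminates in something that looks like "Smith et al. (2021), Science." — fluent, well-formed, and entirely fabricated.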

@JubalBarca @tolortslubor @riskindan

My take on it is: GPT-3 doesn't have *structure*.

Wanna have structure? You've got to supply it yourself to the AI in your prompt.

It's actually good for fluffing up raw, cold, structured data... and making it nicely human-readable.

@JubalBarca @tolortslubor @riskindan Been thinking about this. I wonder about filtering LLM output through semantic linked data.
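One way to sketch that filtering idea: extract citation-like identifiers from model output and check each one against a trusted knowledge base before accepting the text. Everything below is made up for illustration — the DOIs, the regex, and the in-memory "store" standing in for a linked-data source; a real filter might run SPARQL queries against Wikidata or look identifiers up via Crossref instead.

```python
import re

# Stand-in for a linked-data store of known publications;
# a real system would query Wikidata, Crossref, etc. (hypothetical DOIs).
KNOWN_DOIS = {"10.1000/real.paper.1", "10.1000/real.paper.2"}

def verify_citations(text):
    """Split DOIs found in the text into (verified, unverified) sets."""
    matches = re.findall(r"10\.\d{4,9}/[^\s,;)]+", text)
    dois = {m.rstrip(".,;)") for m in matches}  # trim trailing punctuation
    return dois & KNOWN_DOIS, dois - KNOWN_DOIS

sample = ("Bats often crawl quadrupedally (doi:10.1000/real.paper.1); "
          "see also doi:10.1000/fabricated.paper.9.")
ok, suspect = verify_citations(sample)
print("verified:", sorted(ok))
print("flag for review:", sorted(suspect))
```

The filter can't prove a verified claim is true, of course — it only catches references that don't resolve to anything, which is exactly the failure described at the top of the thread.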

@JubalBarca @tolortslubor @riskindan
An intuitive explanation I've heard is:

ChatGPT just wants to tell you a cool story.