From a live-tweet of the proceedings around the lawyer caught using ChatGPT:

"I thought ChatGPT was a search engine".

It is NOT a search engine. Nor, by the way, are the versions of it included in Bing or Google's Bard.

Language model-driven chatbots are not suitable for information access.

>>

@emilymbender - but the misunderstanding is understandable, as providers of these "AI"-driven services are extremely, and seemingly deliberately, obtuse about what they're actually for.

Because if they said "this is a machine that can imitate sounding factual but has absolutely no fact-checking layer or mechanism, nor are we planning to add any such thing", then most people would wonder what the hell they're supposed to use it for at all.

@jwcph @emilymbender But they have said that. Repeatedly. And people have been yelling about that since ChatGPT was released. Repeatedly.

We can infantilize the public and put all of it on the companies, or we can hold people accountable for their mistakes. Disbar the lawyer and the others will learn right quick. Otherwise it's like banning Tide because some people thought eating Tide Pods was fun.

@bradedwards @emilymbender Please provide three examples.

@jwcph @emilymbender

OpenAI ChatGPT:
"ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT May 24 Version"

https://openai.com/gpt-4

"GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts. We encourage and facilitate transparency, user education, and wider AI literacy as society adopts these models. We also aim to expand the avenues of input people have in shaping our models."

Bloomberg, "AI Doesn't Hallucinate. It Makes Things Up": "There's been so much talk about AI hallucinating that it's making me feel like I'm hallucinating. But first…"

@jwcph @emilymbender

After a certain point, "Don't touch the electric fence" means don't touch it.

@bradedwards @emilymbender I get what you mean, but at no point do the vendors (!) tell people that the thing *was not designed to be accurate, and has no mechanism for accuracy at all*. That's why they call it "hallucinations", implying that it's a glitch, and not the thing doing exactly what it's designed to do: Make. Sh*t. Up.
@jwcph @emilymbender Sort of? Bing, you.com, Scite, etc. are explicitly positioning their tools as accurate within the bounds of their markets: web search, academic databases, and so on. And LLM systems can be engineered for that, primarily by using them only as summarization engines over standard search. I haven't seen any data comparing their accuracy to normal search, though.
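
(An aside on that "summarization engine over standard search" pattern: below is a minimal, self-contained sketch of the idea. Everything in it, the keyword_search backend, the llm_summarize callable, the toy corpus, is a hypothetical stand-in rather than any vendor's actual API; the point is only that the model condenses and cites retrieved text instead of answering from its own weights.)

```python
# Sketch of "LLM as summarization engine over standard search".
# All names here are hypothetical stand-ins, not a real vendor API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SearchHit:
    url: str
    snippet: str

# Toy stand-in for a conventional search backend (web index, academic database).
CORPUS = [
    SearchHit("https://example.com/a", "The Eiffel Tower is 330 m tall."),
    SearchHit("https://example.com/b", "It was completed in 1889 in Paris."),
]

def keyword_search(query: str) -> list[SearchHit]:
    """Plain keyword matching; in practice this would be a real search index."""
    terms = query.lower().split()
    return [h for h in CORPUS if any(t in h.snippet.lower() for t in terms)]

def grounded_answer(query: str, llm_summarize: Callable[[str], str]) -> str:
    """Retrieve first; the model is only asked to condense and cite the hits."""
    hits = keyword_search(query)
    if not hits:
        # Refuse rather than let the model invent an answer.
        return "No sources found for that query."
    sources = "\n".join(f"[{i}] {h.url}: {h.snippet}" for i, h in enumerate(hits, 1))
    prompt = (
        "Using ONLY the numbered sources below, summarize an answer to the "
        "question, citing sources by number. If the sources do not contain "
        "the answer, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )
    return llm_summarize(prompt)

if __name__ == "__main__":
    # Echo "model" so the sketch runs without any real LLM attached.
    print(grounded_answer("How tall is the Eiffel Tower?", lambda p: p))
```

The design choice that matters is the refusal path: when retrieval returns nothing, the pipeline says so instead of letting the model improvise, so accuracy is bounded by the search layer rather than by the model's memorized guesses.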

@jwcph @emilymbender ChatGPT, Bard, Pi, etc. haven't been marketed as truth engines. There's a mountain of info out there about how they're not and about how LLMs actually work. For free. Everywhere online.

I'm just entirely unsympathetic to a professional who abdicates their responsibility by adding a tool to their practice without even a modicum of research into how it works. I don't see how that level of negligence bears on the chatbot providers.

@bradedwards @emilymbender Oh, me too; I'm not feeling sorry for that guy at all - but I don't agree that these things aren't marketed as truth engines, because that's exactly what is implied. Especially since, as I keep saying, the vendors never tell people that it is *not designed for accuracy at all & has no mechanism for fact-checking, nor is it intended to have one*. Also, the whole "hallucinations" thing.