From a live tweet of the proceedings around the lawyer caught using ChatGPT:

"I thought ChatGPT was a search engine."

It is NOT a search engine. Nor, by the way, are the versions of it included in Bing or Google's Bard.

Language model-driven chatbots are not suitable for information access.

>>

@emilymbender - but the misunderstanding is understandable, as providers of these "AI"-driven services are extremely, and seemingly deliberately, obtuse about what they're actually for.

Because if they said "this is a machine that can imitate sounding factual but has absolutely no fact-checking layer or mechanism, nor are we planning to add any such thing", then most people would wonder what the hell they're supposed to use it for at all.

@jwcph @emilymbender But they have said that. Repeatedly. And people have been yelling about that since ChatGPT was released. Repeatedly.

We can infantilize the public and put all of it on the companies, or hold people accountable for their mistakes. Disbar the lawyer, the others will learn right quick. Otherwise it's like banning Tide because some people thought eating Tide pods was fun.

@bradedwards @emilymbender Please provide three examples.

@jwcph @emilymbender

After a certain point, "Don't touch the electric fence" means don't touch it.

@bradedwards @emilymbender I get what you mean, but at no point do the vendors (!) tell people that the thing *was not designed to be accurate, and has no mechanism for accuracy at all*. That's why they call it "hallucinations", implying that it's a glitch, and not the thing doing exactly what it's designed to do: Make. Sh*t. Up.

@jwcph @emilymbender Sort of? Bing, you.com, Scite, etc. are explicitly positioning their tools as accurate within the bounds of what is accurate for their markets: search, academic databases, and so on. And LLM systems can be engineered for that, primarily by using them only as summarization engines over standard search. I haven't seen any data comparing their accuracy to normal search.
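
The "summarization engine over standard search" pattern that last post describes can be sketched roughly like this. Everything here is a toy stand-in, not any vendor's actual pipeline: the search backend is naive keyword matching, and the "LLM call" is replaced by an extractive placeholder so the sketch runs without any API.

```python
# Sketch: retrieve documents first, then constrain the model to summarize
# ONLY those documents, instead of generating answers from its weights.

def search(query, corpus):
    """Toy keyword search standing in for a real search backend."""
    terms = set(query.lower().split())
    scored = []
    for doc in corpus:
        score = sum(1 for t in terms if t in doc.lower())
        if score:
            scored.append((score, doc))
    return [doc for _, doc in sorted(scored, reverse=True)]

def summarize_with_llm(question, documents):
    """Stand-in for an LLM call. A real system would send `prompt` to a
    model; here we just return the top document as an 'extractive summary'
    so the example is self-contained."""
    if not documents:
        return "No supporting documents found."
    prompt = (
        "Answer using ONLY the numbered sources below, and cite them.\n"
        + "\n".join(f"[{i}] {d}" for i, d in enumerate(documents))
        + f"\nQuestion: {question}"
    )
    return f"[0] {documents[0]}"  # placeholder for the model's grounded answer

corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "Python was first released in 1991.",
]
answer = summarize_with_llm("How tall is the Eiffel Tower?",
                            search("Eiffel Tower tall", corpus))
print(answer)
```

The design point is that grounding the model in retrieved text narrows (though does not eliminate) the room for it to make things up, which is presumably what these vendors are betting on.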