From a live tweet of the proceedings around the lawyer caught using ChatGPT:

"I thought ChatGPT was a search engine."

It is NOT a search engine. Nor, by the way, are the versions of it included in Bing or Google's Bard.

Language model-driven chatbots are not suitable for information access.

>>

@emilymbender - but the misunderstanding is understandable, as providers of these "AI"-driven services are extremely, and seemingly deliberately, obtuse about what they're actually for.

Because if they said "this is a machine that can imitate sounding factual but has absolutely no fact-checking layer or mechanism, nor are we planning to add any such thing", then most people would wonder what the hell they're supposed to use it for at all.

@jwcph @emilymbender But they have said that. Repeatedly. And people have been yelling about that since ChatGPT was released. Repeatedly.

We can infantilize the public and put all of it on the companies, or hold people accountable for their mistakes. Disbar the lawyer, the others will learn right quick. Otherwise it's like banning Tide because some people thought eating Tide pods was fun.

@bradedwards @emilymbender Please provide three examples.

@jwcph @emilymbender

After a certain point, "Don't touch the electric fence" means don't touch it.

@bradedwards @emilymbender I get what you mean, but at no point do the vendors (!) tell people that the thing *was not designed to be accurate, and has no mechanism for accuracy at all*. That's why they call it "hallucinations", implying that it's a glitch, and not the thing doing exactly what it's designed to do: Make. Sh*t. Up.

@jwcph @emilymbender ChatGPT, Bard, Pi, etc. haven't been marketed as truth engines. There's a million pounds of info out there about how they're not and how LLMs work. For free. Everywhere online.

I'm just entirely unsympathetic to a professional who abdicates their responsibility by adding a tool to their practice without even a modicum of research into how it works. I don't see how that level of negligence bears on the chat bot providers.

@bradedwards @emilymbender Oh, me too; I'm not feeling sorry for that guy at all - but I don't agree that these things aren't marketed as truth engines, because that's exactly what's implied. Especially since, as I keep saying, the vendors never tell people that it is *not designed for accuracy at all & has no mechanism for fact-checking, nor is it intended to have one*. Also, the whole "hallucinations" thing.