From a live tweet of the proceedings around the lawyer caught using ChatGPT:

"I thought ChatGPT was a search engine."

It is NOT a search engine. Nor, by the way, are the versions of it included in Bing or Google's Bard.

Language model-driven chatbots are not suitable for information access.

>>

@emilymbender - but the misunderstanding is understandable, as providers of these "AI"-driven services are extremely, and seemingly deliberately, obtuse about what they're actually for.

Because if they said "this is a machine that can imitate sounding factual but has absolutely no fact-checking layer or mechanism, nor are we planning to add any such thing", then most people would wonder what the hell they're supposed to use it for at all.

@jwcph @emilymbender But they have said that. Repeatedly. And people have been yelling about that since ChatGPT was released. Repeatedly.

We can either infantilize the public and put all of it on the companies, or hold people accountable for their mistakes. Disbar the lawyer, and the others will learn right quick. Otherwise it's like banning Tide because some people thought eating Tide Pods was fun.

@bradedwards @emilymbender Please provide three examples.

@jwcph @emilymbender

OpenAI ChatGPT:
"ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT May 24 Version"

https://openai.com/gpt-4

"GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts. We encourage and facilitate transparency, user education, and wider AI literacy as society adopts these models. We also aim to expand the avenues of input people have in shaping our models."

GPT-4