The biggest question for me about large language model interfaces - ChatGPT, the new Bing, Google's Bard - is this:

How long does it take for regular users (as opposed to experts, or people who just try them once or twice) to convince themselves that these tools frequently make things up?

And assuming they figure this out, how does knowing it affect the way they use these tools?

@simon I fully expect chatbots to turbocharge the existing hatred of experts. Instead of "ChatGPT told me something incorrect", most people will just say "hah, ChatGPT proves those experts are wrong."

And if everyone has access to ChatGPT and maybe 1/5 of the population even knows a subject-matter expert?

Maybe every field will look like medicine does now (even pre-pandemic), where virtually every non-expert believes a half-dozen impossible things they heard once without context.