The biggest question for me about large language model interfaces - ChatGPT, the new Bing, Google's Bard - is this:

How long does it take for regular users (as opposed to experts, or people who just try them once or twice) to convince themselves that these tools frequently make things up?

And assuming they figure this out, how does knowing it affect the way they use these tools?

Someone must have done research on this, right? It feels pretty fundamental!

One argument here is that people will blindly trust any chatbot that supports their pre-existing biases.

Is that cynicism justified?

What happens when the chatbot contradicts their biases? In particular, what if it both counters their biases AND does so in a way that is demonstrably factually incorrect?

We are already seeing furious complaints from some corners that ChatGPT has a liberal bias - how does that affect how those complainants trust and use these tools?

@simon I think we're going to see more ChatGPTs out there, and my guess is that they will attract different people based on their biases. People select their echo chambers in social media, and we've seen the feedback loop that has produced with respect to political extremism. I think we're about to see another feedback loop with ChatGPTs: people seeking out models that confirm their biases, which then drives them to produce biased content to feed back into those models, and repeat.
@sebleier What will happen when a right-leaning chatbot gains popularity, but then people figure out ways to trick it into supporting left wing talking points and start sharing prompts and screenshots?
@simon – People are Bayesian by nature, so depending on how they prioritize truth versus satisfying their biases, some people will dock their favorite chatbot a few points if it spouts an opposing ideology. Past a certain point there's a phase transition, and you may see people migrate to another platform. I see it as analogous to the recent migration of people from Fox News to OANN or Newsmax.