The biggest question for me about large language model interfaces - ChatGPT, the new Bing, Google's Bard - is this:

How long does it take for regular users (as opposed to experts, or people who just try them once or twice) to convince themselves that these tools frequently make things up that aren't accurate?

And assuming they figure this out, how does knowing it affect the way they use these tools?

Someone must have done research on this, right? It feels pretty fundamental!

One argument here is that people will blindly trust any chatbot that supports their previous biases.

Is that cynicism justified?

What happens when the chatbot speaks against their biases? In particular, what if it both counters their biases AND does so in a way that is demonstrably factually incorrect?

We are already seeing furious complaints from some corners that ChatGPT has a liberal bias - how does that affect how those complainants trust and use these tools?

@simon have you turned on any US political news in the last 8 years? I think the idea that there is such a thing as a consensus view of “demonstrably factually incorrect” is a claim so bold as to be unsupportable

@glyph My question remains: if a right-leaning person encounters replies from ChatGPT that directly counter their existing beliefs (and which they can fact check through other sources), do they stop believing that ChatGPT is an infallible source of information?

Even if their conclusion is "It's a conspiracy! The chatbot has been neutered!", does it still provide some level of protection for them in terms of helping them understand that these things are deeply fallible?

@simon Their epistemic foundation is culturally authoritarian, not empirical, and I don't think they'll perceive ChatGPT itself as an agent with its own authority - more like an esoteric fountain of information to be incorporated into their (already incoherent) syncretic model of the world. So they'll poke at it until it reveals some "hidden truth" and they'll believe or not-believe its various mumblings on a case-by-case basis.

@simon like the entire concept of syncretism is such a wild ride. Someone like Jordan Peterson is already LLM-esque in his "intellectual" output: he will treat words that are similar even like… phonetically… or concepts with geometrically similar visualizations as "the same", happily cherry-picking from the scientific literature looking for confirmation of his biases

@simon from an empirical epistemic viewpoint, you'd expect that if they're citing scientific studies, the locus of authority is in empirical observations and the process of peer review; but no, the authority comes from the bias-confirming authority of the filter (your Peterson or Shapiro or Crowder) telling you *which* studies are the right ones to trust, for some reason

@simon so I think that ChatGPT will occupy the same spot in the hierarchy of authority as "science", which is to say that the various grifter/preachers will mine it for confirmation bias, discard everything it produces that they don't like, repeat everything it says that they do like as secretly true, and very few individual rank-and-file right-wingers will bother to interact with it directly