The biggest question for me about large language model interfaces - ChatGPT, the new Bing, Google's Bard - is this:

How long does it take for regular users (as opposed to experts, or people who just try them once or twice) to convince themselves that these tools frequently make things up that aren't accurate?

And assuming they figure this out, how does knowing it affect the way they use these tools?

Someone must have done research on this, right? It feels pretty fundamental!

One argument here is that people will blindly trust any chatbot that supports their existing biases.

Is that cynicism justified?

What happens when the chatbot speaks against their biases? In particular, what if it both counters their biases AND does so in a way that is demonstrably factually incorrect?

We are already seeing furious complaints from some corners that ChatGPT has a liberal bias - how does that affect how those complainants trust and use these tools?

Hindu nationalists are FURIOUS about ChatGPT right now: https://www.wired.com/story/chatgpt-has-been-sucked-into-indias-culture-wars/

How will that impact their trust of systems like this in the future?

@simon This isn't really cynicism, I think it's more an optimistic view of people.

@simon No research, but after an afternoon of 'playing' with ChatGPT, I had worked out its limitations.

My takeaway, and note of optimism, is that people will be able to 'smell' bot-generated text quite easily. Whether they'll care is another discussion.

@simon To be fair though, they also thought a plain red cup had a liberal bias.
@simon I think we're going to see more ChatGPTs out there and my guess is that they are going to attract different people based on their biases. People select their echo chambers in social media and we've seen the feedback loop it has produced with respect to political extremism. I think we're about to see another feedback loop with ChatGPTs. That is, people seeking out models that confirm their biases, which then drives them to produce biased content to feed back into it, and repeat.
@sebleier What will happen when a right-leaning chatbot gains popularity, but then people figure out ways to trick it into supporting left-wing talking points and start sharing prompts and screenshots?
@simon – People are Bayesian by nature, so depending on how they prioritize truth vs. satisfying their biases, you'll see some people dock their favorite chatbot a few points if it spouts an opposing ideology. If it gets to a certain point, you'll see a phase transition and people may migrate to another platform. I see it as analogous to the recent migration of people from Fox News to OANN or Newsmax.
@simon have you turned on any US political news in the last 8 years? I think that the idea that there is such a thing as a consensus view of “demonstrably factually incorrect” is a statement so bold as to be unsupportable

@glyph My question remains: if a right-leaning person encounters replies from ChatGPT that directly counter their existing beliefs (and which they can fact check through other sources), do they stop believing that ChatGPT is an infallible source of information?

Even if their conclusion is "It's a conspiracy! The chatbot has been neutered!", does it still provide some level of protection for them in terms of helping them understand that these things are deeply fallible?

@simon Their epistemic foundation is culturally authoritarian, not empirical, and I don't think they'll perceive ChatGPT itself as an agent with its own authority, more like an esoteric fountain of information to be incorporated into their (already incoherent) syncretic model of the world. So they'll poke at it until it reveals some "hidden truth" and they'll believe or not-believe its various mumblings on a case-by-case basis.
@simon like the entire concept of syncretism is such a wild ride. Someone like e.g. Jordan Peterson is already LLM-esque in his "intellectual" output: he will treat words that are similar even like… phonetically… or concepts with geometrically similar visualizations as "the same"; happily cherry-picking from scientific literature looking for confirmation of his biases
@simon from an empirical epistemic viewpoint, you'd expect that if they're citing scientific studies, the locus of authority is in empirical observations and the process of peer review; but no, the authority comes from the bias-confirming authority of the filter (your Peterson or Shapiro or Crowder) telling you *which* studies are the right ones to trust, for some reason
@simon so I think that ChatGPT will occupy the same spot in the hierarchy of authority as "science", which is to say that the various grifter/preachers will mine it for confirmation bias, discard everything it produces that they don't like, repeat everything it says that they do like as secretly true, and very few individual rank-and-file right-wingers will bother to interact with it directly