The biggest question for me about large language model interfaces - ChatGPT, the new Bing, Google's Bard - is this:

How long does it take for regular users (as opposed to experts, or people who just try them once or twice) to convince themselves that these tools frequently make things up?

And assuming they figure this out, how does knowing it affect the way they use these tools?

@simon I work in a School Division, and what some are starting to find is that there is currently a limit to how good these tools are: while the reach of the information is very wide, the depth is not.

There are frequent mistakes, and outright plagiarism, in some of the responses the machine learning system is providing.

This technology will make things easier, but does not eliminate the need to validate the information.

@rlitchfield are your students figuring that out? How does their usage of these tools change once they realize how inaccurate they can be?
@simon Junior/high school students who are using these technologies to cheat are not the type to look too deeply; all they want is a shortcut. It is the teachers who are seeing how weak some of the results are.
@rlitchfield How does a student's opinion of the technology change over time, in particular after the second or third time they've been caught using it because it gave them facts that were obviously untrue and were marked as such?