“If the LLM produces a wild result, something that doesn’t meet my expectations, *then* I’ll turn to more reliable sources. I’m not blindly following just anything it says.”

People feel that this is being a “responsible user of new technology.”

I think it is actually the opposite.
1/2

The most exciting and pivotal moments in research are those times when the results do not meet your expectations.

We live for those moments.

If an LLM is not reliable enough for you to trust unexpected results, then it is not reliable enough to tell you anything new: it’s incapable of telling you anything that you don’t (at some level) already know.

2/2

@futurebird Not to mention all the cases where opposing answers could very well be reasonable.

Wouldn’t that be likely to happen often? Otherwise, why is the user asking the question in the first place?

Example: suppose I want to check whether a museum is open on a certain day, and Google’s “AI Overview” helpfully provides an answer. How could this policy of filtering by expectations catch an error without my actually checking the source material?