Had a thought... if an LLM could weigh the credibility of its input sources, would that help with truthfulness/hallucinations? It seems like part of the problem is that they pull in input indiscriminately.
I don't see a good way to determine 'credibility' automatically, though. What I think I'm really describing is an opinionated LLM, which I don't necessarily have a problem with. Sarcasm detection could also be difficult.
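Just to make the idea concrete, here's a rough sketch of what I mean at the retrieval layer. Everything here is made up: the source names, the trust numbers, the scoring rule. Assigning those weights in the first place is exactly the part I don't see a good automated answer for.

```python
# Toy sketch of credibility-weighted retrieval (all names and scores are
# hypothetical). The idea: instead of feeding retrieved passages to the model
# indiscriminately, scale each passage's relevance by a per-source trust
# weight and rank on the combined score.

from dataclasses import dataclass

# Hypothetical, hand-assigned trust weights per source domain.
SOURCE_TRUST = {
    "journal.example.org": 0.9,
    "wiki.example.com": 0.7,
    "random-forum.example.net": 0.3,
}

@dataclass
class Passage:
    source: str       # domain the passage came from
    text: str         # retrieved snippet
    relevance: float  # similarity score from the retriever, 0..1

def credibility_weighted_rank(passages: list[Passage]) -> list[Passage]:
    """Rank passages by relevance scaled by the source's trust weight."""
    def score(p: Passage) -> float:
        trust = SOURCE_TRUST.get(p.source, 0.5)  # default for unknown sources
        return p.relevance * trust
    return sorted(passages, key=score, reverse=True)

if __name__ == "__main__":
    passages = [
        Passage("random-forum.example.net", "Someone claims X cures everything.", 0.95),
        Passage("journal.example.org", "A controlled study found X has a modest effect.", 0.80),
        Passage("wiki.example.com", "X is a compound first described in 1962.", 0.60),
    ]
    for p in credibility_weighted_rank(passages):
        print(f"{p.source}: {p.text}")
```

Note the highly relevant forum claim gets demoted below the journal passage purely because of the trust weight, which is the whole point, and also where the "opinionated" part comes in: someone has to decide those weights.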
I feel like this should be obvious to people closer to the problem. Hm.