Okay, here's a question about #LLM hallucination:
Anecdotally speaking, only a small fraction of the false information I encounter when using LLMs is produced as a direct response to my query.
The chance of misinformation seems much higher when the LLM adds supplemental context around the main query.
Is this a known phenomenon? If not, I might start a project on the topic.
(Example in reply. 1/3)
#AI #LLMs #ChatGPT #CompLing #ComputationalLinguistics #Computerlinguistik #Linguistik
