#LLMs will pretty much always have hallucination problems. Companies and the media frame it as "currently 5% of everything GPT says is a hallucination, but in two years we'll get it down to 0.01%." But the real problem is that the model can't reason about anything, or model anything other than the statistical relationships between words in a language. In that sense, everything it says is a "hallucination": saying something correct was never a design goal #ai #aiethics