#LLMs will pretty much always have hallucination problems. Companies and the media frame it as "currently 5% of everything GPT says is a hallucination, but in 2 years we'll get it down to 0.01%". But the actual problem is that the model is incapable of reasoning about or modeling anything other than the relationships between words in a language. Everything it says is a "hallucination" in the sense that saying something correct was never a design goal #ai #aiethics
When I did my master's, we were told the tale of the CEO of Borders, who was offered the chance to create an online books division but declined, believing the internet to be a passing fad. Of course, Borders went out of business after Amazon took its place. I think we are so afraid of looking like Borders that we now refuse to point out very obvious problems with technology, lest we date ourselves and look silly. Since #ai is the future, we'd rather say nothing than say something and risk being wrong #aiethics
@MoBlack large language models are always hallucinating. Sometimes they just happen to be factually correct.