@Catvalente The other thing with this is that "hallucinate" implies that producing accurate, fabrication-free output is the normal way an LLM works, and that "hallucination" is some failure of that norm which might be treatable.
No. Fabricating linguistically plausible output is exactly what we expect a "language model" to do. With some tuning, the technology may do surprisingly well at producing accurate output, but it is still fundamentally a model of language, and there's no particular reason to believe a version that isn't prone to hallucination is possible without a methodology completely different from how LLMs work.
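To make the point concrete, here's a deliberately toy sketch of the core move a language model makes: score possible continuations by plausibility and pick one. The vocabulary, prompt, and numbers are all invented for illustration (this is not any real model's code), but notice that nothing in the objective ever asks whether the continuation is true, only how likely it looks as text.

```python
# Hypothetical toy example: "plausibility" scores (logits) for next tokens after
# the prompt "The capital of Australia is". Values are invented for illustration.
import math
import random

toy_logits = {
    "Sydney": 2.0,      # very common in text, but factually wrong
    "Canberra": 1.5,    # correct, but appears less often in casual writing
    "Melbourne": 1.0,
    "kangaroos": -3.0,
}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample_next_token(logits):
    """Sample a continuation in proportion to its plausibility, not its truth."""
    probs = softmax(logits)
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

if __name__ == "__main__":
    print(softmax(toy_logits))
    print("Next token:", sample_next_token(toy_logits))
```

In this toy setup the wrong answer is the most probable one, and the model is working exactly as designed when it emits it. That's the sense in which "hallucination" isn't a malfunction layered on top of normal operation; it's the normal operation.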