“Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’”

More accurately, AI researchers have always said that this isn’t fixable, but y’all were too obsessed with listening to con artists to pay attention. Now the con is wearing thin. https://fortune.com/2023/08/01/can-ai-chatgpt-hallucinations-be-fixed-experts-doubt-altman-openai/

Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped. (Fortune)

@baldur exactly this:

Quote from the article:

“This isn’t fixable,” said Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “It’s inherent in the mismatch between the technology and the proposed use cases.”

It’s like trying to solve the problem that cars don’t provide any nutritional value. The idea of using LLMs to provide detailed factual information is just not tenable. That’s not what they are.

@knbrindle @baldur @robabram
Yep. We had an enquiry at SEN Magazine the other day asking about an article on [very detailed and specific title] written by [name of reputed researcher in that field], but we couldn't find any trace of the article, nor any article by that author. Turns out the enquirer had obtained the 'article' details from ChatGPT, but it was just something that the named author might plausibly have written, and that we might plausibly have published. But hadn't.
@jern I’m still waiting for the first patent application citing fantasy prior art. Mind you, as hardly any applicants mention any concrete prior art these days, with or without LLM assistance, it could be a long wait.
@robabram Ooh, dangerous. "Applicant acknowledges that [non-existent but innovative technical solution disclosure] forms part of the prior art..."