“Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’”

More accurately, AI researchers have always said this isn’t fixable, but y’all were too obsessed with listening to con artists to pay attention. Now the con is wearing thin. https://fortune.com/2023/08/01/can-ai-chatgpt-hallucinations-be-fixed-experts-doubt-altman-openai/


Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

Fortune

@baldur Good to see some down-to-earth quotes in that article, like how these systems “are designed to make things up. That’s all they do”. I always push back on ‘hallucination’ as a term, because there’s no difference between a ‘hallucination’ and an LLM’s regular generated output - it’s all just generated content. The model doesn’t ‘understand’ anything, so it can’t grasp the concepts of truth or accuracy.

Maybe the output generated for a given prompt is accurate, maybe it’s complete nonsense - it’s just chance.

Alex Chaffee (@[email protected])

@[email protected] It occurred to me this morning that for #LLMs, at least, the “I” in #AI stands for “improv”. Maybe folks would be a little less likely to entrust their life decisions to a machine if they thought of it as an underemployed, half-drunk actor trying to impress its buddies by cracking jokes on stage in a seedy L.A. nightclub.

Ruby.social