“Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’”

More accurately, AI researchers have always said that this isn’t fixable, but y’all were too obsessed with listening to con artists to pay attention. Now the con is wearing thin. https://fortune.com/2023/08/01/can-ai-chatgpt-hallucinations-be-fixed-experts-doubt-altman-openai/

@baldur One of the best ways to predict the next word is to understand the meaning. In some cases, training is going to produce an internal model that actually does capture the meaning, because that’s what earns reinforcement.

But since we can’t really read out what the AI’s reasoning is, we can only keep training it more and more; we can’t tell when it has hit on the correct internal model and when it’s just making a lot of good guesses via approximations. (Cont’d)
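
To make “good guesses via approximations” concrete, here’s a minimal sketch: a toy bigram counter that predicts the next word from nothing but co-occurrence counts over a tiny invented corpus. (This is not how GPT-class models work internally; it’s the crudest possible version of prediction without meaning.)

```python
# Toy sketch: a bigram "language model" that predicts the next word
# purely from co-occurrence counts. The corpus and all names here are
# invented for illustration.
from collections import Counter, defaultdict

corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is beautiful . "
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen after `word`."""
    return follows[word].most_common(1)[0][0]

# Generate a fluent-looking continuation from a prompt.
word = "the"
out = [word]
for _ in range(6):
    word = predict_next(word)
    out.append(word)
print(" ".join(out))
# Prints "the capital of france is paris ." -- but only because those
# words co-occurred in the corpus, not because anything here "knows"
# what a capital is.
```

From the outside, the output is indistinguishable from understanding on familiar inputs; the failure only shows up off the beaten path.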

@baldur Or can we? Can we ask hard questions, new questions, that can be answered only by something that has the proper model of the underlying problem?

Maybe. But even then, there would be no guarantee that the AI would apply that model to the next question rather than some other rule of thumb it has learned, triggered by some other word in our next question about the “same” topic.
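
To make the probing idea concrete, here’s a minimal, entirely invented sketch: two toy “answerers” for addition questions that agree on every training question and only come apart on a genuinely new one.

```python
# Hypothetical probe sketch. One answerer implements the actual rule;
# the other only memorized the training pairs and guesses on anything
# unseen. Data and functions are invented for illustration.

train = {(2, 3): 5, (4, 4): 8, (7, 1): 8}

def true_model(a, b):
    """Has the proper internal model: actually adds."""
    return a + b

def rule_of_thumb(a, b):
    """Memorized the training set; otherwise guesses the most common answer."""
    return train.get((a, b), 8)  # 8 is just the most frequent training answer

# On the training questions, the two are indistinguishable:
for (a, b), y in train.items():
    assert true_model(a, b) == rule_of_thumb(a, b) == y

# A genuinely new question separates them:
print(true_model(6, 9))      # 15
print(rule_of_thumb(6, 9))   # 8 -- a confident, fluent, wrong answer
```

And per the caveat above, even a passed probe only tells you the right model was applied to that one question, not that it will be applied to the next one.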