Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’

https://lemmy.world/post/2504608


Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

I was excited about the recent advancements in AI, but it seems the field has hit another wall. It seems best used for automating very simple tasks, or at most as a guiding tool for professionals (e.g., medicine, SWE, …)

Hallucination is common for humans as well. It’s just people believing they know things they really don’t.

We have alternative safeguards in place. It’s true, however, that the current generation of LLMs has its limitations.

Sure, but these things exist as fancy storytellers. They understand language patterns well enough to write convincing prose, but they don’t understand what they’re saying at all.

The metaphorical human equivalent would be having someone write a song in a foreign language they barely understand, say Spanish. You can get something that sounds convincing, sounds good even, but to someone who actually speaks Spanish it’s nonsense.

GPT can write and edit code that works. It simply can’t be true that it’s solely doing language patterns with no semantic understanding.

Because it can look up code for this specific problem in its enormous training data? It doesn’t need to understand the concepts behind it, as long as the problem is specific enough to have been solved already.

I can tell GPT to do a specific thing in a given context and it will do so intelligently. I can then provide additional context that implicitly changes the requirements and GPT will pick up on that and make the specific changes needed.

It can do this even if I’m trying to solve a novel problem.

But the naysayers will argue that your problem is not novel and a solution can be trivially deduced from the training data. Right?

I really dislike the simplified word-predictor explanation that is given for how LLMs work. It makes the thing seem like a lookup table, while ignoring the nuances of what makes it work so well.
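
To be clear about what the word-predictor framing actually describes, here’s a minimal toy sketch of the autoregressive loop. Everything in it (the vocabulary, the scoring rule) is invented purely for illustration; a real LLM replaces the scoring function with a neural network over the entire context, which is exactly why the lookup-table intuition breaks down: the space of possible contexts is astronomically larger than any table could store.

```python
# Toy illustration of the "word predictor" loop -- NOT how any real LLM
# is implemented. VOCAB and the scoring rule are made up for demonstration.
import math

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def next_token_distribution(context):
    """Stand-in for the learned model: scores every vocab token given
    the whole context, then softmax-normalizes the scores into a
    probability distribution."""
    prev = context[-1]
    scores = []
    for tok in VOCAB:
        score = 0.0
        if tok == prev:
            score -= 1.0  # discourage immediate repetition
        if VOCAB.index(tok) == (VOCAB.index(prev) + 1) % len(VOCAB):
            score += 1.0  # loosely prefer the toy word order above
        scores.append(score)
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return dict(zip(VOCAB, (e / total for e in exps)))

def generate(prompt, n_tokens):
    """Greedy autoregressive decoding: repeatedly pick the most likely
    next token and feed the growing context back into the model."""
    context = list(prompt)
    for _ in range(n_tokens):
        dist = next_token_distribution(context)
        context.append(max(dist, key=dist.get))
    return context

print(generate(["the"], 5))  # ['the', 'cat', 'sat', 'on', 'mat', '.']
```

The loop itself is trivial; all of the interesting behavior lives inside the scoring function, which is the part the simplified explanation glosses over.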

But the naysayers will argue that your problem is not novel and a solution can be trivially deduced from the training data. Right?

Yes, obviously. Unless @Serdan is publishing papers about their solutions to previously unsolved computational problems, we should assume that by “novel problem” they actually just mean “a mundane problem for which every step of the solution is trivial, even if the steps have never been combined in that exact order before”.