Is ChatGPT Getting Worse?
Yep, definitely. I have a Plus subscription, and stuff that was easy for it just a few months ago now takes several back-and-forths to barely approach similar results.
Science content is where I noticed the most degradation. It just gives me blank "it's not in my training data" answers to questions that used to get comprehensive responses.
I think they’re scaling down the models to make them cheaper to run?
When they first launched the Bing AI powered by GPT, I used it for everything. Then it became pretty clear they nerfed it, and I've been waiting for a competitor to catch up. Bard's gotten a little better, but it still hallucinates far worse, making up answers wholesale.
I'm secretly hoping for one of these open-source projects like Llama 2 or Orca to lead to a totally unrestricted chatbot, even if it's short-lived.
It’s pretty great at writing short utility scripts and code. And it’s fantastic at explaining errors, warnings, and log file dumps.
That’s what I use it for.
Strongly disagree on the explaining part, because you can't know whether it's correct. And you still have to validate any code it creates, so 🤷‍♂️
I've asked it to produce C code for a specific product, and it effectively summarized and reproduced existing example code. Being able to trace its output back to a training source that easily gave the whole trick away.
I wouldn't be surprised if it is getting worse. It's not "real" intelligence that "understands" your questions, and unlike more targeted solutions like GitHub Copilot, it doesn't have a strong use-case focus that can guide its progress.
But I think it's also that people are coming to terms with what ChatGPT actually can and, more importantly, cannot do. It's crazy sometimes to hear what the average person thinks the current iteration of AI is capable of.