GPT-4 is getting worse over time, not better.
It’s a rat race. We want to get to the point where someone can say “prove P != NP” and the model spits out a coherent proof.
After that, whoever verifies it’s coherent will collect the money.
Not really understanding the distinction at first, I asked ChatGPT a lot about math: I gave it formulas to solve, asked it questions about very large numbers, and things like that. I then did my due diligence on its answers, which overwhelmingly turned out to be spot on.
Then, as is being reported elsewhere, the thing got progressively worse at math, and its results seemed to become randomized. I could ask it the very same math questions as I did in the beginning, but now the answers were garbage. It has 100% been ‘smoothed’ out in the thinking department, likely because people were finding ways to monetize it that its creators hadn’t even thought of, so they backpedaled.
Because it’s something completely new that they don’t fully understand yet. Computers have been good at math from the very beginning; everything else was built on top of that. People are used to that.
Now, all of a sudden, the infinitely precise and accurate calculating machine is just pulling answers out of its ass and presenting them as fact. That’s not easy to grasp.