@abananabag @alinanorakari this is a good point to make, though I'm in disagreement:
ChatGPT's area of expertise is *conversation* and nothing else. Everything else is incidental to its design (though they keep working to improve the quality of its output). To be precise, its focus is on creating what a reply would look like.
This is why it sometimes gets a reputation for being argumentative: if your message looks upset, it treats the conversation as the start of an argument, and an argumentative reply is what an argument looks like.
If you ask it for prime numbers, it knows the response looks like a bunch of numbers.
It does well with programming because code is just another sort of language pattern.
Likewise with answering questions about general information, because the best-looking response is usually an accurate one.
But that's also why it hallucinates (makes up false information): "I don't know" is not considered a good-looking response in the system, so a plausible-sounding answer wins out.
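You can sketch that last point as a toy: pretend the model just scores candidate replies by plausibility and picks the top one. (This is a deliberately simplified illustration, not ChatGPT's actual mechanism, and the scores are made up.)

```python
# Toy sketch of reply selection: score candidates by "plausibility"
# and pick the highest. A fluent-but-wrong answer can easily
# outscore an honest "I don't know".

def pick_reply(candidates):
    # candidates: dict mapping candidate reply -> plausibility score
    return max(candidates, key=candidates.get)

# Hypothetical scores for a question the model has no real data on:
scores = {
    "The capital is Ashford.": 0.62,  # wrong, but looks like an answer
    "I don't know.": 0.15,            # rarely the best-looking reply
}
print(pick_reply(scores))  # → The capital is Ashford.
```

The point isn't the code, it's the shape of the problem: if "looks like a good reply" is the only criterion, honesty about uncertainty loses.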