I asked ChatGPT about primes ending in 2 to make it prove a point and it proved the point far better than I could have hoped for.

Please do not be a fool who trusts ChatGPT with anything outside your field of expertise, and even then double or triple check what it tells you if you must use it.
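For anyone wondering why that's a trick question: in base 10, every number ending in 2 is even, so 2 itself is the only prime ending in 2. A minimal sketch (plain Python, trial-division primality check; the names are mine):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Any base-10 number ending in 2 is divisible by 2,
# so 2 is the only possible prime in this list.
primes_ending_in_2 = [n for n in range(2, 100_000) if n % 10 == 2 and is_prime(n)]
print(primes_ending_in_2)  # [2]
```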

@alinanorakari
That's pretty funny. Lest anyone think this was an unfair test of ChatGPT, mathematics is one of its core domains of expertise. (At least that's what ChatGPT told me, but maybe I should know better than to believe a pathological liar twice.)

A while ago I asked it for the 100th digit of π and it insisted, hilariously aggressively, that there is no 100th digit. It seemed to base that on the fact that π doesn't repeat and there are fewer than 100 distinct digits, but I think I broke it when I asked about base 100. It eventually informed me that there isn't even a first digit of π, either.
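For the record, the 100th decimal digit of π is perfectly well defined (it's 9), and base 100 is just a change of radix: since 100 = 10², the base-100 digits of the fractional part are the decimal digits taken in pairs. A sketch using Machin's formula with integer arithmetic, stdlib only (the function names and the 10 guard digits are my choices):

```python
def arccot(x: int, unity: int) -> int:
    """arccot(x) scaled by `unity`, via the Taylor series of arctan(1/x)."""
    xsq = x * x
    total = xpower = unity // x
    n = 3
    sign = -1
    while xpower:
        xpower //= xsq
        total += sign * (xpower // n)
        sign = -sign
        n += 2
    return total

def pi_digits(n: int) -> str:
    """First n decimal digits of pi after the point, via Machin's formula:
    pi = 16*arccot(5) - 4*arccot(239). Ten guard digits absorb truncation error."""
    unity = 10 ** (n + 10)
    pi = 4 * (4 * arccot(5, unity) - arccot(239, unity))
    return str(pi)[1:n + 1]  # str(pi) starts "314159..."; drop the leading 3

digits = pi_digits(100)
print(digits[99])   # 100th decimal digit of pi: "9"
print(digits[0:2])  # first base-100 "digit" after the point: the pair "14"
```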

I will note that the answers it gave you are both (a) shorter and (b) less arrogant-sounding. ChatGPT used to be incredibly rude, unable to admit, much less contemplate, the possibility of being wrong.

I think the problem was that they trained it on transcripts from very smart people. It learned to mimic their charmless assertions and condescending style, but with none of their knowledge.

@abananabag @alinanorakari this is a good point to make, though I disagree:

ChatGPT's area of expertise is *conversation* and nothing else. Everything else is incidental to its design (though they keep working to improve the quality of its output). To be precise, its focus is on creating what a reply would look like.

This is why it at times gets a reputation for being argumentative: if the message it's replying to looks upset, it thinks it's looking at the start of an argument, so it figures the reply would be argumentative.

If you ask it for prime numbers, it knows the response looks like a bunch of numbers.

It does well with programming because code is just another sort of language pattern.

Likewise with answering questions about general information, because the best-looking response is an accurate one.

But that's also why it hallucinates (makes up false information): "I don't know" doesn't look like a good response to the system.

@shiri @abananabag it does well with code syntax over small blocks, but it really struggles with global syntax (e.g. type safety, concurrency, object lifetimes, immutability) as well as semantics, and it knows nothing about pragmatics
@alinanorakari @abananabag much the same as it does over longer conversations lol