The era of ChatGPT is kind of horrifying for me as an instructor of mathematics... Not because I am worried students will use it to cheat (I don't care! All the worse for them!), but rather because many students may try to use it to *learn*.

For example, imagine that I give a proof in lecture and it is just a bit too breezy for a student (or, similarly, they find such a proof in a textbook). They don't understand it, so they ask ChatGPT to reproduce it for them, and they ask followup questions to the LLM as they go.

I experimented with this today, on a basic result in elementary number theory, and the results were disastrous... ChatGPT sent me on five different wild-goose chases with subtle and plausible-sounding intermediate claims that were just false. Every time I responded with "Hmm, but I don't think it is true that [XXX]", the LLM responded with something like "You are right to point out this error, thank you. It is indeed not true that [XXX], but nonetheless the overall proof strategy remains valid, because we can [...further Gish gallop containing subtle and plausible-sounding claims that happen to be false]."

I know enough to be able to pinpoint these false claims relatively quickly, but my students will probably not. They'll instead see them as valid steps that they can perform in their own proofs.

I see so many adults and professionals talking about how they are using LLMs to deepen their understanding of things, but I think this runs headlong into the “Gell-Mann amnesia” effect — these people think they are learning, but it only feels that way because they are ignorant enough about the topic they're interested in to not detect that they are being fed utter bullshit.

How shall we answer this? I think it falls most urgently to those of us who actually know things, those with "intellectual power": we must democratise our knowledge, throw aside the totems that make our fields inaccessible and obscure, and open the gates to the multitudes who wish to learn.

At first it seems like it would be easy to compete with LLMs (after all, they produce only bullshit), but to actually compete with them we need to produce educational materials that actually explain things properly. Any 'proof by intimidation' will immediately send our students to the LLM. The moment you rely on something you haven't explained, same deal. So it may be that this era has a silver lining: we must finally teach mathematics properly.

@jonmsterling
This goes far beyond mathematics. It's an issue in almost every field I have any serious interest in and I see no reason it should be any different elsewhere. Teaching effectively is *hard*. LLMs promise to do for the student what their teachers all too often don't - explain complex matters in terms they can understand, at a pace they can follow.
@jonmsterling
Back when I was still involved with academia, it was usually students doing their best to help others where the teachers failed. This didn't always go well: there was no guarantee the help you got was actually helpful - a lot of the time, these were people who barely understood the concepts themselves trying to explain them to people who didn't understand at all. LLMs promise to do it better - after all, they have all of the relevant info in their training, right?
@jonmsterling
Well, yes... And no. They probably do have all the relevant info in their training data. After all, they probably scraped Wikipedia wholesale. They don't have any concept of logical consistency or correctness, though. It's all just random garbage formatted to look like an answer. They've gotten incredibly good at doing that - to the point where it's convincing even to experts if we're not looking too closely. That's a problem.
@jonmsterling
Unfortunately, I don't think there's a simple solution - as I said, teaching is hard. I do suspect you're on the right track - better explanations and making resources available in more than one format might help. This takes effort and time, though - and I'm not entirely sure every teacher has that time.