The era of ChatGPT is kind of horrifying for me as an instructor of mathematics... Not because I am worried students will use it to cheat (I don't care! All the worse for them!), but rather because many students may try to use it to *learn*.

For example, imagine that I give a proof in lecture and it is just a bit too breezy for a student (or, similarly, they find such a proof in a textbook). They don't understand it, so they ask ChatGPT to reproduce it for them, and they ask followup questions to the LLM as they go.

I experimented with this today, on a basic result in elementary number theory, and the results were disastrous... ChatGPT sent me on five different wild goose-chases with subtle and plausible-sounding intermediate claims that were just false. Every time I responded with "Hmm, but I don't think it is true that [XXX]", the LLM responded with something like "You are right to point out this error, thank you. It is indeed not true that [XXX], but nonetheless the overall proof strategy remains valid, because we can [...further gish-gallop containing subtle and plausible-sounding claims that happen to be false]."

I know enough to be able to pinpoint these false claims relatively quickly, but my students will probably not. They'll instead see them as valid steps that they can perform in their own proofs.

I see so many adults and professionals talking about how they are using LLMs to deepen their understanding of things, but I think this runs headlong into the "Gell-Mann amnesia" effect: these people think they are learning, but it only feels that way because they are ignorant enough about the topic they're interested in to not detect that they are being fed utter bullshit.

How shall we answer this? I think it speaks most urgently for people who actually know things, those with "intellectual power", to democratise our knowledge, throw aside the totems that make our fields inaccessible and obscure, and open the gates to the multitudes who wish to learn.

At first it seems like it would be easy to compete with LLMs (because they say only bullshit), but to actually compete with LLMs we need to produce educational materials that actually explain things properly. Any 'proof by intimidation' will immediately send our student to the LLM. The moment you rely on something that you haven't explained, same deal. So it may be that this era has a silver lining: we must finally teach mathematics properly.

@jonmsterling
The more I work in edu, and the more I read history, the less I am convinced it's possible to break free from LLMs.

At least here in the US, the incentives for getting an education are all wrong. Rarely does anyone ever concern themselves with actually learning, but rather they're just trying to get through the process as quickly and efficiently as they can so they can move on to a job.

Unless we sort this out, and make people actually interested in the learning process, we definitely won't be able to reckon with this tech.

@mav @jonmsterling History will course-correct. Places with genuinely open learning will advance beyond the US. It will take a long time because of the sheer amount of resources the US has amassed, but it will happen eventually. This is not a fight the US has to win; it's a process of achieving a balance in the whole world, hopefully without dictators coming out on top.
@cohentheblue @jonmsterling
I guess what I was trying to say is that this is where part of the problem comes from in the US, but it is definitely not the only source of the problem. ChatGPT addiction seems to be fairly universal.
@mav @jonmsterling Knowing the stats doesn't feel like a good use of my time. I'd rather speak against LLM usage and for learning the slower but more thorough way, which improves people more in the end. Different messages appeal to different people.

F.D. Signifier on YouTube said something I agree with: we need to make cool shit, and then gradually our propaganda and message will win out over the convenient, artificial stuff. You can't just repeat the message; you first need something interesting and cool.