The era of ChatGPT is kind of horrifying for me as an instructor of mathematics... Not because I am worried students will use it to cheat (I don't care! All the worse for them!), but rather because many students may try to use it to *learn*.

For example, imagine that I give a proof in lecture and it is just a bit too breezy for a student (or, similarly, they find such a proof in a textbook). They don't understand it, so they ask ChatGPT to reproduce it for them, and they ask followup questions to the LLM as they go.

I experimented with this today, on a basic result in elementary number theory, and the results were disastrous... ChatGPT sent me on five different wild goose-chases with subtle and plausible-sounding intermediate claims that were just false. Every time I responded with "Hmm, but I don't think it is true that [XXX]", the LLM responded with something like "You are right to point out this error, thank you. It is indeed not true that [XXX], but nonetheless the overall proof strategy remains valid, because we can [...further gish-gallop containing subtle and plausible-sounding claims that happen to be false]."

I know enough to be able to pinpoint these false claims relatively quickly, but my students will probably not. They'll instead see them as valid steps that they can perform in their own proofs.

I see so many adults and professionals talking about how they are using LLMs to deepen their understanding of things, but I think this runs headlong into the “Gell-Mann amnesia” effect — these people think they are learning, but it only feels that way because they are ignorant enough about the topic they're interested in to not detect that they are being fed utter bullshit.

How shall we answer this? I think it speaks most urgently for people who actually know things, those with "intellectual power", to democratise our knowledge, throw aside the totems that make our fields inaccessible and obscure, and open the gates to the multitudes who wish to learn.

At first it seems like it would be easy to compete with LLMs (because they say only bullshit), but to actually compete with LLMs we need to produce educational materials that actually explain things properly. Any 'proof by intimidation' will immediately send our student to the LLM. The moment you rely on something that you haven't explained, same deal. So it may be that this era has a silver lining: we must finally teach mathematics properly.

@jonmsterling IMO you have to ensure that there's a full loop from information to synthesis to application in learning - when learning from instructors, when learning from trial and error, or when learning from an untrustworthy source (like an LLM, but also learning from other students).

Basically - when you go to apply the things and it doesn't work, that's how you filter out the BS.

If the teaching model never asks for that or provides tools for self-check, then mistakes can get locked in.

@jonmsterling If you're learning things to use them and not just to be able to say that you know them, then whatever your source of imperfect information you at least have the motive and methodology to eventually refine it.

The challenge from an instructor's point of view is then how to make it clear how the things they are teaching provide affordances for the student to do something they might be interested in doing (which the instructor does not have direct knowledge of).

@nichg @jonmsterling, you make an important point. But it is hard to execute. Going the full cycle from theory to application, while making sure the application hits the pitfalls that actually require the more advanced theory - it takes so much time!

A 7.5 ECTS course could easily be extended to 15 credits without really adding material if you want the students to learn where all the subtleties occur.

Those paying for the education would perceive it as students learning half as efficiently. 😞

@jonmsterling teaching properly is so key... current systems don't teach metacognition or the ability for students to assess their own learning, which imo is why students tend to re-watch recorded lectures / read the output of chatgpt etc, which are poor ways to learn but are so popular because students aren't taught any other way.

@koronkebitch @jonmsterling If you leave a big enough pedagogy hole, it can get filled with almost anything. Seems like everyone knows the solution is to respect kids' awareness of and agency among pedagogies (I know a guy who ran a pre-school using consensus), and yet every wave of reform fails to actually do that.

IMHumbleO it's political and financial perspectives & decisions (at least as much as power-over cultural legacies) which prevent "mainstream" schools from really centering students.

@koronkebitch @jonmsterling The most important aspect that forces people to learn that way is lack of time. My mathematics studies were ruined by being forced to learn for exams. This ruined any interest I had in mathematics as a youth since I understood that I didn't actually understand and I had no will or energy left in the day to learn more on my own.
@koronkebitch @jonmsterling I think it is tried, but students do not care... they do not want to think

@jonmsterling 🤔 I don't think it's easy to compete with LLMs, even if they say bullshit: they provide a one-on-one interaction, and I suspect it's a strong part of their appeal. If someone who grasps the material interacted with small enough groups of students, they'd make strides. But this isn't compatible with the current economic policies in many countries (or only for highly selected—and thus privileged—groups of students).

That doesn't mean there isn't room for improvement of course.

@jonmsterling It reminded me of when Musk bought Twitter, and people who thought "Oh, he's smart; that's why he owns Tesla and SpaceX" suddenly learned "Oh, it wasn't that he was smart - it's that he had money, and kicked out the smarter people."

It wasn't clear to them, until we saw him messing up in our field, that it was entirely possible he had messed up in the other fields too, and we just didn't have the subject-matter knowledge to see it.

Man, if that happened it might even be worth it!


@jonmsterling Quite right. LLMs optimise proximity to training data, which means they *sound like* someone knowledgeable.
They don't optimise for truth or insight or knowledge or consistency.

The problem is not the LLM (it is optimising what is asked of it); the problem is the companies promoting LLMs as a source of knowledge. It is essentially false advertising.

Same with AI art. Blending is not the same as understanding.

@jonmsterling There is another lesson that I think is harder for us to get across that I was reminded of by recently seeing this
Mathoverflow answer (https://mathoverflow.net/a/51868/1199) again:

In a pressure-cooker culture of high stakes, high anxiety, and easy distraction, we need to find ways to convince people that seriously struggling with something will prove thousands of times more valuable than empty fast-food AI "summaries". This is much harder to do currently, but it fits with wider trends about how people struggle to read books any more, struggle to be alone with just their own thoughts, struggle to face what is at their current limit.

@boarders @jonmsterling Do you have a source for the claim that people struggle to read books nowadays or any of the other claims after this one?

@zvavybir @boarders @jonmsterling

This is a complex issue and evidence for social trends and facts is of course very difficult to nail down, given how sensitive it is to methodology and interpretation, but here are some supporting sources:

- "Collective attention spans" shrinking: https://www.nature.com/articles/s41467-019-09311-w
- "Americans Reading Fewer Books Than in Past": https://news.gallup.com/poll/388541/americans-reading-fewer-books-past.aspx
- "Depression, anxiety, and daytime dysfunction" correlated with higher smartphone use: https://akjournals.com/view/journals/2006/4/2/article-p85.xml
- "media multitasking", which is more common for younger people, is "mostly correlated with negative mental health": https://pmc.ncbi.nlm.nih.gov/articles/PMC8598050/

I don't mean to suggest these are definitive, but IMO there is ample evidence to support taking an interest and addressing this as something that is potentially a real problem. Based on my own experience, it seems pretty clear that many forces in modern socio-technical systems are aligned to stifle deep reflection and critical thought, and to drive us into modes of constant distraction. This is something that was already well diagnosed by early critical theorists.

Also relevant: "The Elite College Students Who Can't Read Books" (The Atlantic): "To read a book in college, it helps to have read a book in high school."

@boarders @jonmsterling

Absolutely. "Seriously struggling with the material" vs "easy AI summaries". I've been reflecting a lot on this lately; I think of it as "student as author" vs "student as editor". Students need to practice the subject, not collate findings about it.

https://mastodonapp.uk/@the_roamer/113728692997818394

I agree, this is related to more general cultural trends that make it harder for us as individuals to "be alone with our own thoughts".

the roamer (@[email protected])

Thinking about the impact of genAI on student learning. Learning occurs when a student struggles with the material and thus constructs the truth of the subject for herself. (The teacher's role is to provide the right challenges.) GenAI offers a poisonous shortcut. The student collates AI-generated texts and acts as their editor, without ever being author. She accumulates answers that aren't anchored in her own practice. True or false, these answers aren't hers. #genAI #learning #pedagogy

@jonmsterling
This goes far beyond mathematics. It's an issue in almost every field I have any serious interest in and I see no reason it should be any different elsewhere. Teaching effectively is *hard*. LLMs promise to do for the student what their teachers all too often don't - explain complex matters in terms they can understand, at a pace they can follow.
@jonmsterling
Back when I was still involved with academia, it was usually students doing their best to help others where the teachers failed. This didn't always go very well, since there was no guarantee that the help you got was all that helpful - a lot of the time, these were people who barely understood the concepts themselves trying to explain them to people who didn't understand at all. LLMs promise to do it better - after all, they have all of the relevant info in their training, right?
@jonmsterling
Well, yes... And no. They probably do have all the relevant info in their training data. After all, they probably scraped Wikipedia wholesale. They don't have any concept of logical consistency or correctness, though. It's all just random garbage formatted to look like an answer. They've gotten incredibly good at doing that - to the point where it's convincing even to experts if we're not looking too closely. That's a problem.
@jonmsterling
Unfortunately, I don't think there's a simple solution - as I said, teaching is hard. I do suspect you're on the right track - better explanations and making resources available in more than one format might help. This takes effort and time, though - and I'm not entirely sure every teacher has that time.

@DL1JPH @jonmsterling Nothing quite as intense for learning as having to teach —

Some of my best subject learning was done when I was paid to give the class. As an undergraduate, it was such a wild cheat. Weird AF. Felt wrong, but you got so good. You were shamed into it.

@jonmsterling I would never use an LLM to try to understand something, because I know superficially how LLMs work. I use them to generate text I'm too lazy to write myself, but that's it.
@jonmsterling The right mental model for interacting with an LLM is to treat it like a person being tortured: It will say whatever is most likely to make you stop, the only trustworthy answers are ones that you can instantly validate.
@david_chisnall @jonmsterling may I use this as a quote in something I’m writing on the subject, you succinctly wrote what I’ve spent paragraphs doing 
@david_chisnall @jonmsterling if you can instantly validate the answers why ask the question?
@david_chisnall @jonmsterling Next time, I'll make sure to ask how many lights there are.
@david_chisnall @jonmsterling From what I understand of the human side of LLM training, that analogy takes an even darker turn.
@jonmsterling sadly this is a lost battle imho. Everyone and their mother will go down the path that they perceive as easy and rewarding...
Otherwise, we would have invested everything we spent on graphics cards into making Wikipedia great.
The problem (again, imho) is more about how grown-ups exemplify (or don't) effort and work in every field and aspect of our lives, to the ones whose job is still to learn...
@jonmsterling
I forget where I first saw it, but yeah, "Gell-Mann Amnesia As A Service"

@jonmsterling What I've noticed about my use of LLM to introduce me to high-level concepts in areas I struggle with is the non-judgemental and consistently available nature of it.

It's ... "easy" to ask it questions, not because I think it'll be right (I'm a professional programmer, I know what the limitations are) but because of the amount of trauma I have from people getting frustrated with me and making me feel awful for not understanding.

I don't know how to teach people to replicate that.

@aurynn @jonmsterling That's easy. You are your own algorithm, like on mastodon. You ask those who have the patience AND time. If people don't have time for you, you'll need to spend more of your time learning the hard and slow way, reading about the topics on forums etc. where you see the full context of what people say and have a better sense of what could be true, what not.

@cohentheblue @aurynn @jonmsterling “RTFM” was something we prided ourselves on with tongue in cheek, at the programming society

It had its benefits

It had its problems

But it needs to be done. Well, after TFM is written. Okay I didn’t mean this to become a rant about lack of documentation.

@whophd @cohentheblue @jonmsterling as someone who was on the receiving end of RTFM numerous times … no, it wasn’t tongue-in-cheek.

@aurynn @cohentheblue @jonmsterling I must readily agree there’s a limit

There’s a limit both ways. I wonder how to define the minimum and maximum that’s reasonable for someone to be expected to skill up.

I’m totally sympathetic to preventing exclusion, and just as sympathetic to preventing someone from being forced to do another person’s work.

It’s a perennial discussion back at the society …

@whophd @aurynn @cohentheblue @jonmsterling manuals, assuming they are actually available, are not written for people to learn from. they are for people who know what is going on but need details. RTFM was and is lazy and generally abusive. much like "let me google that for you"
@mensrea @aurynn @cohentheblue @jonmsterling Hmm, I think you're right. Though I fear it's been too long since I've been harassed by a child asking questions they don't actually want answered.
@cohentheblue @jonmsterling it’s a bit wild you came into my mentions to say “skill issue” and do the same toxic and shitty things that mean that using LLMs can be a better choice for me.

@aurynn @jonmsterling No. I said you don't get something for nothing and it is what it is.

Easy wrong answers are in my opinion worthless. If you find burning energy for suspect answers worthwhile, I do not approve, because you're not quite in the harmless-personal-choice realm any more. Nevertheless, I won't even try to stop you, since it's a very minor case in comparison to corpo greed. I feel mine was a very harmless / neutral reaction.

@cohentheblue your comment demonstrated the “judgmental & make people feel awful (that it’s not easy) for them” behavior & response that @aurynn commented on.

Learning, not practicing existing skill capability or expertise, is not easy, can be discouraging, demotivating & challenging until you develop some of the target skill capability. Especially if it’s a bio-mechanical motor skill. Beyond age 7, there’s no shortcuts to acquiring & retaining w/o repeated practice.


@dahukanna @aurynn @jonmsterling My use of the word easy can be misunderstood, sorry.

Easy as in there's no other good way; a better way does not exist in current conditions. Of course everyone is different.

My point is if one wants reasonable certainty that they learn correct info, there's no other option.

I'm constantly learning, including motor skills. Nothing is impossible with time and patience. Depends on what people prioritize in life. The more you learn, the better you get at it.

@cohentheblue @aurynn
I agree that accepting any information provided by a digital tool, without critical, deliberate reflection & validation to make it "useable knowledge", should not be people's default behavior. So how can we make that reflection the default?
I'm presuming your use of the phrase "That's easy" was to say "That's obvious (to you)".
I'd recommend providing an explanation for the person who does not find it easy/obvious, like "and here's the evidence why ..."
@aurynn @jonmsterling It makes sense to “ask like no one’s watching” presuming an incapacity for personal judgment on the part of the tech we interact with. What this misses is that the judgment is there, just on the other side of the tech. The business model that supports LLMs and most other digital tech today, and the people who pursue that business model, are constantly and intricately judging all of us according to how much worth we can provide to them.

@jonmsterling Sigh, in my own field of expertise (computer science)...

There's plenty of science communicators tackling it, but I've got complaints with so much of their pedagogy! The near-exclusive focus on Moore's Law & The Turing Test leaves people ripe for that misinformation!

On the other hand...
It's *easier* for us to compete against the interactivity of ChatGPT, and I do see public computer science education improving on YouTube.

I suspect people in other fields can gripe too...

@jonmsterling There has been some good research done and papers written about this topic in the last year: https://arxiv.org/abs/2404.03502
AI and the Problem of Knowledge Collapse

While artificial intelligence has the potential to process vast amounts of data, generate new insights, and unlock greater productivity, its widespread adoption may entail unforeseen consequences. We identify conditions under which AI, by reducing the cost of access to certain modes of knowledge, can paradoxically harm public understanding. While large language models are trained on vast amounts of diverse data, they naturally generate output towards the 'center' of the distribution. This is generally useful, but widespread reliance on recursive AI systems could lead to a process we define as "knowledge collapse", and argue this could harm innovation and the richness of human understanding and culture. However, unlike AI models that cannot choose what data they are trained on, humans may strategically seek out diverse forms of knowledge if they perceive them to be worthwhile. To investigate this, we provide a simple model in which a community of learners or innovators choose to use traditional methods or to rely on a discounted AI-assisted process and identify conditions under which knowledge collapse occurs. In our default model, a 20% discount on AI-generated content generates public beliefs 2.3 times further from the truth than when there is no discount. An empirical approach to measuring the distribution of LLM outputs is provided in theoretical terms and illustrated through a specific example comparing the diversity of outputs across different models and prompting styles. Finally, based on the results, we consider further research directions to counteract such outcomes.


@jonmsterling "to democratise our knowledge, throw aside the totems that make our fields inaccessible and obscure, and open the gates to the multitudes who wish to learn." ✊

See also: https://fedihum.org/@lavaeolus/113556620130330137

Henrik Schönemann (@[email protected])

I had the opportunity to post one of my all time favorite quotes; it needs to be reposted from time to time: "Those with access to these resources — students, librarians, scientists — you have been given a privilege. You get to feed at this banquet of knowledge while the rest of the world is locked out. But you need not — indeed, morally, you cannot — keep this privilege for yourselves." #AaronSwartz, 2008, Guerrilla Open Access Manifesto #OpenAccess #OpenKnowledge https://openbehavioralscience.org/manifesto/


@jonmsterling
The more I work in edu, and the more I read history, the less I am convinced it's possible to break free from LLMs.

At least here in the US, the incentives for getting an education are all wrong. Rarely does anyone ever concern themselves with actually learning, but rather they're just trying to get through the process as quickly and efficiently as they can so they can move on to a job.

Unless we sort this out, and make people actually interested in the learning process, we definitely won't be able to reckon with this tech.

@mav @jonmsterling History will course correct. Places with actually open learning will advance beyond the US. It will take a lot of time due to the sheer amount of resources US has amassed but it will happen eventually. This is not a fight US has to win, it's a process of achieving a balance in the whole world, hopefully without dictators coming out on top.
@cohentheblue @jonmsterling
I guess what I was trying to say is that this is where part of the problem comes from in the US, but it is definitely not the only source of the problem. ChatGPT addiction seems to be fairly universal.

@mav @jonmsterling Knowing the stats doesn't quite feel like an important use of my time. I'd rather speak against LLM usage and for learning the slower but more thorough way, which improves people more in the end. Different messages appeal to different people.

F.D. Signifier on YouTube said something I agree with: we need to make cool shit, and then gradually our propaganda and message will win over the convenient, artificial stuff. You can't just repeat the message; first, something interesting and cool.

@jonmsterling
Do you mind if I share this elsewhere? (With credit of course)

@jonmsterling It is important to understand that Large LANGUAGE Models are OBVIOUSLY incapable of true artificial intelligence, and that the people pushing them are not even working on the problem of true AI. They haven't the simplicity of reasoning necessary to do so. Once one understands the following argument—which everyone is too "modern" to have noticed—one might be better equipped to dissuade students from going to LLMs.

The argument is simple and proves LLMs are a dead end...

@jonmsterling Here we go—

Human intelligence is plainly almost exactly the same thing as chimpanzee intelligence. We are closely related, and most things a human can do, a chimp can do too.

But chimpanzees have no language whatsoever. NOT ONE BIT OF IT.

Therefore human intelligence cannot be modeled with language!

Q.E.D.

And in fact mathematics in particular is largely visualization and proprioceptive reasoning, not linguistic. This is why LLMs babble when asked simple questions about math.

@jonmsterling I have run into the sort of person who says they have a condition where they "think" only in words. So I ask them if they are incapable of feeding themself. They most likely will just go away, because the LLM crowd are dead set on their delusion.

People have forever mistaken their internal verbal dialogue with their intelligence. In fact most of our intelligence is completely unconscious. It is quite possible to solve a math problem in one's sleep and wake up with the answer.

@jonmsterling It means LLMs will never get any better. They can only perhaps look like it by reaching into larger databases for canned knowledge.

Language is an auxiliary faculty, whose primary role is to let humans record knowledge. It is something no animal species can do, and makes us fundamentally different. But the animal species otherwise have essentially the same faculties. If LLMs cannot reproduce the intelligence of a cat, then they cannot do a human, either.