A small anecdote in relation to a recent coffee conversation I had with @TaliaRinger (which they relate over at https://twitter.com/TaliaRinger/status/1681410191278080000 ): Yesterday I spoke with a children's book author who was interviewing me as part of a series she was writing on contemporary scientists. She freely admitted that she did not have great experiences with her math education at an under-resourced school and chose very early on to focus on writing instead. Nevertheless we had an excellent conversation about many mathematical topics that she was not previously familiar with, such as proof by contradiction, Cartesian coordinates, Möbius strips, and compressed sensing, all of which she found fascinating (and she said she would read up on more of these topics herself after our interview). I posed to her the isoperimetric problem (using the classic story of Queen Dido from the Aeneid as the intro), and she correctly guessed the shape that maximizes the area enclosed by a loop of fixed length (a circle), and instantly grasped the analogy between this problem and the familiar fact that inflated balloons are roughly spherical in shape. I am certain that had her path turned out differently, she could have attained far greater levels of mathematical education than she ended up receiving.

This is not to say that all humans have an identical capability for understanding mathematics, but I do strongly believe that this capability is often far higher than what is actually manifested through one's education and development. Sometimes the key missing ingredient is a suitable cognitive framework that aligns mathematical concepts with a given person's particular mental strengths.

Talia Ringer on Twitter

“Terry Tao and I spoke over coffee for like two hours yesterday, in part about diversity in how people think about math. We both agreed that people who hit these walls early mostly don't learn the way of thinking about math that works for them. It's an educational failure”

@TaliaRinger In this specific case, I guessed (correctly, as it turned out) that framing mathematical concepts and problems as narratives (ideally involving children) would be particularly effective in communicating mathematics to a writer of children's books. For instance, in addition to the Dido story, I could explain proof by contradiction through the story of young children challenging each other at recess to name the largest number, until one realized that because they could always add one to the number the other child proposed, there was in fact no largest number. Compressed sensing I could explain via the need, before modern CS algorithms were implemented in MRI machines, for a child to sit still for several minutes during a scan. Möbius strips I could explain via a proposed children's activity of cutting such strips to encourage mathematical exploration. These were handpicked examples, but in general I think a lot can be done with creatively reframing the way we present a given mathematical topic.
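The recess argument is itself a complete proof by contradiction (or, formally, a direct proof that every number is exceeded by its successor). A one-line sketch in Lean 4, just to show how short the formal version is (the theorem name is mine):

```lean
-- The recess argument: for every natural number n, the number n + 1 is
-- strictly larger, so no natural number can be the largest.
theorem no_largest : ∀ n : Nat, ∃ m : Nat, n < m :=
  fun n => ⟨n + 1, Nat.lt_succ_self n⟩
```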
@tao @TaliaRinger Imagine trying to find a "suitable cognitive framework" for each of 175 individual students every day for every topic and you will begin to understand the challenge of being a K-12 classroom teacher!
@phonner @TaliaRinger I have hope that AI tools could partially alleviate this problem in the future. Already there are promising experiments such as Khan Academy's "Khanmigo" https://www.khanacademy.org/khan-labs or Upward Mobility Foundation's "uME" https://www.theumf.org/ (full disclosure: I am an advisor for the latter project). Of course, the majority of students will still benefit the most from personalized attention from expert human teachers, but as you say this is a limited resource.

@tao @phonner My Uber driver back from the beach yesterday mentioned that he is preparing to go back for his master's, and is taking linear algebra online right now. He has had trouble understanding the way concepts are presented in class, but has been using ChatGPT to ask about concepts in a way that makes sense for him, as a supplement to the class. He said the professor is too busy to give him this kind of personal attention for the amount of time he wants it (about an hour weekly).

Like everything with AI, this both worries and excites me. We have to be careful that it doesn't become a replacement for more personalized attention when that could be available, and we have to be careful to build tools that are either correct or that help users calibrate to their untrustworthiness and think critically about automatically generated responses. If we can do both of those things, I think it can be great. I still worry there are not strong economic incentives to do those two things; often replacement with inferior automation is unfortunately profitable.

@TaliaRinger @tao @phonner this spring I took a computer science course on compilers, and found ChatGPT super helpful. I could ask it about the details of a theorem and its role in the associated theory, and get a sensible, very helpful response.

It was also super helpful with Java. I know mostly C# as my "big, standard OOP" language, and in many cases I knew exactly what I needed to write in my Java code, but just needed to know the name of the API/class or the usual Java idiom.

@ddrake @tao @phonner I'm curious how often it was correct, and how easily you were able to tell when it was incorrect, in the context of compilers. Also, what sorts of questions you asked it about compilers, if you have any examples. (I am teaching undergrad PL & compilers in the fall.)

@TaliaRinger @ddrake I use it the same way when writing Rust code; looking things up in the Rust documentation is too complicated, and its organization is not what I'm familiar with.

Therefore, I use ChatGPT and ask for small pieces of code, like how to convert 4 bytes to an int64, and it's usually correct! Since the tasks are deterministic, if the answer contains errors, I can fix them easily.
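For reference, a minimal Rust sketch of the kind of small, deterministic task being described (the function name and the little-endian, zero-extension assumptions are mine; the original post doesn't specify them):

```rust
// Convert 4 bytes to an i64, assuming little-endian byte order:
// widen the 4 bytes to a u32 first, then zero-extend to i64.
fn bytes4_to_i64(bytes: [u8; 4]) -> i64 {
    u32::from_le_bytes(bytes) as i64
}

fn main() {
    // 0x01 + (0x02 << 8) = 513
    assert_eq!(bytes4_to_i64([0x01, 0x02, 0x00, 0x00]), 513);
    // All-ones zero-extends to u32::MAX, never a negative i64.
    assert_eq!(bytes4_to_i64([0xFF, 0xFF, 0xFF, 0xFF]), 4_294_967_295);
    println!("ok");
}
```

This is exactly the kind of snippet that is easy to verify: run it once against a couple of known values and any error in the generated code shows up immediately.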

I guess it would be fun to see the result of asking a harder question, like how good a given graph coloring algorithm is.

@dannypsnl @TaliaRinger "usually correct" Yes!

ChatGPT is like asking a colleague who's very knowledgeable, but doesn't have their head exactly in the context you're working. So you ask them for advice, trying to describe the problem, and even if they answer confidently, you know that because they haven't been working with *this* exact bug report or section of code, you need to take their answer and do a little thinking to be sure it'll work.

People are telling all these stories about copy and pasting code from ChatGPT, but the truth is, we do that from our colleagues all the time. "Hey Mike, when using the Whatever API, how do you do X?" Mike says "you just call whatever.do_the_thing with the first argument set to blah blah blah." You do that, because Mike is super smart, but then later learn that for your precise snippet of code, you need to do something slightly different.

You need to think about Mike's response; you need to think about ChatGPT's response.

@ddrake @TaliaRinger Your colleague analogy is great!

I was feeling it's also like rubber duck debugging: since I have to stop and describe the question in natural language, that process helps me frame the problem, and sometimes even solve it!

@ddrake @dannypsnl Forgive any typos---my eyes are dilated and I cannot see well. I think the "we copy and paste code from people/StackOverflow too" comparison is a bit worrying, because these chatbots tend to mess up in ways that are quite different from how people mess up, and so require different training to properly evaluate.

@TaliaRinger @tao @phonner this is my favorite interaction, on Chomsky normal form and why it only has two productions:

https://chat.openai.com/share/6ed8e1e7-ef7c-4311-b726-23d8ee8ac8fb

I would also often use ChatGPT to explain something I already thought I knew; I just wanted to get another explanation. Say the lecturer explained topic A. I think "okay, yes, I completely understand that, but I suspect there may be something idiosyncratic or unique about this explanation." So I would ask and see if, say, everyone else thinks about topic A in a slightly different way. (This is especially helpful when the teacher says "in this class..." or "for this assignment, you just need to know...")


@TaliaRinger @tao @phonner here's another conversation on Julia and dataframes where it got things wrong: https://chat.openai.com/share/f0f4b11a-878f-4030-b6e0-5e5d19fe1b74

I'm not seeing any of the conversations where it just got things wrong, I said so, and then it apologized and gave it another go. Weird...
