Category error! I'm sick to the back teeth of wrongheaded comparisons of inanimate objects to humans. It's so rife even colleagues do it. What's next?

> I compared a rock and a person, and challenged them to stay still the longest and the rock won! Wow!

Things thought up by the unhinged & those who wish to dehumanise for profit.

https://tomkahe.com/@GiftArticles/116199021426825296

I was so exasperated by the Donald Knuth thing the other day that I wrote this on a post about it:
> There is a rhetorical move here supporting a metaphysical claim that conflates a human activity with the activity of a machine. This, again, is not scientific; it also demands explanation and justification that goes beyond presenting evidence. If someone rides a bicycle down the road, nobody says that the bicycle walked down the road. If someone flies a simulated plane from Boston to Chicago in a flight simulator, nobody says the person traveled to Chicago. Yet somehow when people think with the aid of a certain kind of AI machine, we're meant to refer to that as the machine doing the thing humans do (thinking, solving a problem, inventing, or what have you). We're meant to believe that what the machine is doing is not meaningfully different from what humans do despite the obvious layers of metaphor involved. This conflation is not scientific, it's metaphysical. It demands an explanation and justification that goes beyond just presenting evidence because it is making a claim about how the world works or is structured.
@abucci you're so patient and yes, that was unsettling
@[email protected] It really tries my patience when people say AI has "solved" math or some nonsense like that.

Speaking of patient, though, you're really fighting the good fight 💪
@abucci it's honestly unbelievable people say that, not only false but destructive to maths cc @Iris

@olivia @abucci @Iris How do I put this? I find the "category error" criticism accurate. And it rightly prepares and leads into the socioeconomic criticism of dehumanisation of work and into the sociopsychological deskilling criticism.

What I wonder is: is there any "progress" or "benefit", say, of/for the discipline of mathematics once a certain proof exists (assuming the rest of the discipline manages to continue evolving without deskilling) that we risk blinding ourselves to by just insisting that AI isn't itself "proving" things? This focus on the category error makes it sound like if we managed to avoid anthropomorphisation and use different vocabulary for the function AI plays, the criticism would miss the point.

1/2

@olivia @abucci @Iris

While writing this, it occurs to me that it's naïve to assume what is in brackets above: that the discipline can evolve in an "untainted" way while accepting, on a regular basis, proofs that have not been conceived by human scholars. But what kind of taint is that? I have a hunch it's none of the problems mentioned before. Or is it?

I'll re-read your nice paper on "human-centered AI" and think about how its analyses apply to maths as a discipline.

2/2

@[email protected] Hi Andreas, there are lots of ways to consider this question:
> is there any "progress" or "benefit"...that we risk blinding ourselves to by just insisting that AI isn't itself "proving" things?
but the first one that springs to my mind is this. Isn't the more interesting, and pertinent, question "is there any progress or benefit that we risk blinding ourselves to by NOT insisting AI isn't proving things"? Your version of the question takes a default optimistic stance that the use of AI is not harmful or obfuscating to human mathematical thought and practice, when we cannot know one way or the other at this stage. I note that this stance is heavily pushed by the US tech sector, and is therefore already worthy of skepticism.

Besides that, mathematics has been around for thousands of years; what justifies enthusiasm for such a radical change to our way of practicing it? Aren't we meant to be conservative about our knowledge production systems? I find discussion of these sorts of questions largely absent in the discourse about AI, at least the mainstream discourse, but shouldn't they be central, given what's at stake?

We risk doing the equivalent of throwing away our financial security betting on a slot machine because we won once or twice and the guy next to us claims he made a fortune that way.

@[email protected] @[email protected]

@abucci

Hi Anthony, thanks for your response.

The scenario I had in mind was mathematicians of the 2070s still being pretty much like today's mathematicians and those of the past, looking at the corpus of problems, theorems and proofs established by then, and not caring much about when and in which way a specific proof was introduced, as long as the proof itself is correct as evaluated by those mathematicians themselves. Proving and correctness may lie in the eyes of the human observer, not in the neural network that outputted the proof, but that does not detract from said correctness at all. I feel uneasy if we focus mainly on what we call this "outputting" or deny that there is a new proof there.

I have already ack'd in the other toot: the scenario is naïve insofar as it assumes the only thing to have changed would be a handful of additional proofs with a different genesis. I'd like to understand the other changes we should expect for the discipline.

@olivia @Iris

@anwagnerdreas @abucci @olivia @Iris

I see it as twofold: a burden of proof argument, and a question about where energies are best spent. For the former, whenever proposing a new tool, the onus is on the person advancing said new proposal to show that it works, or at least works well enough to be worth consideration.

For the second, cranks *could* be right about their wild mathematical claims, but we rightly often reject them out of hand as a timesaving heuristic.

@anwagnerdreas @abucci @olivia @Iris It's impractical to individually evaluate the claims of every crank theorem, and so we largely don't do it.

When it comes to LLM-generated "proofs," I think it's worth comparing to Lean and other formalized proof systems. We rationally have enough confidence in how Lean builds proofs from lower-level theorems and axioms that it's worth approaching Lean-based proofs in good faith. LLMs, by contrast, do not offer any such structure we can use.
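To make that contrast concrete, here is a minimal Lean 4 sketch (my own illustration, not from the thread): even a one-line proof is a term that the small trusted kernel checks against the stated type, built from previously verified lemmas, so trusting the proof reduces to trusting the kernel rather than whatever process produced the text.

```lean
-- A trivial theorem: the kernel verifies that the term
-- `Nat.add_comm a b` really does have the stated type,
-- building on the already-checked library lemma.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

An LLM's free-form "proof" text offers no analogous checkable structure unless it is independently formalized.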

@[email protected] You make several great points.

The non-surveyability issue is a big one: https://en.wikipedia.org/wiki/Non-surveyable_proof . A bunch of people rejected the computer-assisted proof of the four-color theorem until it was significantly simplified. Imagine a math LLM spitting out considerably more complicated proofs at a breakneck pace. I argue that eventually such a thing would be indistinguishable from a random string generator. It'd also waste the time and energy of a whole lot of mathematicians in the process, as you pointed out.

We are already seeing code review---human beings checking pull requests and the like---being overwhelmed by LLM code generators. Some organizations are abandoning this step as a result. What purpose is served by introducing this kind of dynamic into mathematics, of all things? It's quite strange to me, this bias towards accelerating everything whenever possible, regardless of systemic or other risks.

Proofs written in Lean and similar systems have the very big benefit of surveyability, and there's probably a world in which ethically made and constituted LLMs could add beneficial features to such tools.

The analogy to cranks is interesting. I guess in my head it's similar to why we don't throw a handful of leaves up in the air and try to read a proof out of the pattern they make when they fall to the ground (usually!). It's the folly of approaching the problem of finding a needle in a haystack by making the haystack bigger. People love making the haystack bigger for some reason.

@[email protected] @[email protected] @[email protected]


@abucci

I found the claim that GPT has 'custom personalities' particularly 🤬-inducing.

Even though I know 'persona' is from the Latin for 'mask'.

@olivia

@abucci @olivia nothing new under the Sun, see: Tibetan prayer wheels

https://en.wikipedia.org/wiki/Prayer_wheel


@[email protected] I don't understand the connection. Can you please elaborate a bit?
@[email protected]
@abucci @olivia the idea that a machine spinning has the same value as human prayer is not that different from "metaphysical claim that conflates a human activity with the activity of a machine".
@[email protected] Ah, OK, I understand now. Thank you!

I know very little about Buddhism, so I am speaking from a position of near total ignorance. That said, it occurs to me that perhaps the person's relation to the wheel is what's important. Someone who thinks they can offload prayer to a physical object is in trouble spiritually already, I would think. At that level the analogy with conflating machine activity with human thought is quite interesting.

@[email protected]
@abucci @olivia Buddhism is non-theistic, and in any case prayer wheels are distinctly Tibetan. Other faiths have mechanical prayer devices like Catholic rosary beads or Islamic tasbih, but they are assistive in nature, not autonomous.