I don't want to formalize any of my work on mathematics. First because, as Emily Riehl notes, formalization tends to impose consensus. And second, because I find it boring. It steals time from creative thought to nail things down with more rigidity than I need or want.

Kevin Buzzard says "It forces you to think about mathematics in the right way." But there is no such thing as "the" right way to think about mathematics - and certainly not one that can be forced on us.

https://www.quantamagazine.org/in-math-rigor-is-vital-but-are-digitized-proofs-taking-it-too-far-20260325/

In Math, Rigor Is Vital. But Are Digitized Proofs Taking It Too Far? | Quanta Magazine

The quest to make mathematics rigorous has a long and spotty history — one mathematicians can learn from as they push to formalize everything in the computer program Lean.

Quanta Magazine

@johncarlosbaez
> formalization tends to impose consensus

I'm not sure people are getting your point. The way I see it, formalization tends to make people subconsciously think in terms of The One And Only Truth -- and thus consensus.

But we know that in mathematics, there are always other paths, and truths follow from axioms / axiom schemas, which themselves can vary.

Or perhaps I'm missing your point, too.
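Doug's point about truths depending on axioms can be sketched in Lean (a toy illustration, not tied to any particular library): excluded middle is simply a theorem once you accept Lean's classical axioms, but constructively it is just one more assumption a proof may or may not take.

```lean
-- Under classical axioms, excluded middle is a theorem:
example (P : Prop) : P ∨ ¬P := Classical.em P

-- Constructively, it is an explicit hypothesis; here it is used
-- to derive double-negation elimination for P:
example (P : Prop) (em : P ∨ ¬P) : ¬¬P → P :=
  fun nnp => em.elim id (fun np => absurd np nnp)
```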

@dougmerritt - You got my point. Working in Lean or any computer system for formalization, you need to submit to the already laid down approaches, or spend a lot of time rewriting things.

I added a quote from Kevin Buzzard to emphasize the problem:

Kevin Buzzard says "It [formalization? Lean?] forces you to think about mathematics in the right way."

But there's no such thing as "the" right way!

@johncarlosbaez @dougmerritt

This is just the beginning.

Current systems are the FORTRAN and Pascal of proof systems; they are for building pyramids--imposing, breathtaking, static structures built by armies pushing heavy blocks into place.

What we need is for someone to invent the Lisp of proof systems. Something that helps individuals to think new thoughts.

@maxpool @johncarlosbaez
Yes, well, moving past John's point:

Easier said than done. Current things like Lean are lots better than the systems of years ago, but -- do you have any specific ideas?

I used to follow that area of technology, but I somewhat burned out on it. For now, Terry Tao et al. are getting good mileage out of Lean.

I suppose there's some analogy with the period of shift from Peano axioms to ZFC and beyond.

@dougmerritt - I follow some people who are into formalization, logic and type theory more sophisticated than Lean: @MartinEscardo, @andrejbauer, @pigworker and @JacquesC2 leap to mind. They're the ones to answer your question.

@johncarlosbaez @dougmerritt @MartinEscardo @JacquesC2 @pigworker Somewhat unexpectedly, I find myself on the same side as @xenaproject on this one, I suppose because I read "the right way" differently from @johncarlosbaez

Formalized mathematics makes us think "the right way" in the sense that it requires mental hygiene, it encourages better organization, it invites abstraction, and it demands honesty.

Formalized mathematics does not at all impose "One and Only Truth", nor does it "nail things down with rigidity" or "impose consensus". Those are impressions that an outsider might get by observing how, for the first time, some mathematicians have banded together to produce the largest library of formalized mathematics in history. But let's be honest, it's minuscule.

Even within a single proof assistant, there is a great deal of freedom of exploration of foundations, and there are many different ways to formalize any given topic. Not to mention that having several proof assistants, each peddling its own foundation, has only contributed to plurality of mathematical thought.

Current tools are relatively immature and do indeed steal time from creative thought to some degree, although people who are proficient in their use regularly explore mathematics with proof assistants (for example @MartinEscardo and myself), testifying to their creative potential.

Finally, any fear that Mathlib and Lean will dominate mathematical thought, or even just formalized mathematics, is a hollow one. Mathlib will soon be left in the dust of history, but it will always be remembered as the project that brought formalized mathematics from the fringes of computer science to the mainstream of mathematics.

@andrejbauer - "Formalized mathematics makes us think "the right way" in the sense that it requires mental hygiene, it encourages better organization, it invites abstraction, and it demands honesty."

I did read it differently. I was really worrying that Kevin meant formalizing mathematics in *Lean* forces us to think the right way. But in fact I don't think formalizing mathematics at all makes us think "the" right way. It has good sides, which you mention, so it's *a* right way to do mathematics. But it also has bad sides. Mostly, it doesn't encourage radical new ideas that don't fit well in existing formalisms. Newton, Euler, Dirac, Feynman and Witten are just a few of the most prominent people who broke out of existing frameworks, didn't think formally, and did work that led to a huge growth of mathematics. If you say "those people are physicists, not mathematicians", then you're slicing disciplines differently than me. I find their ideas more mathematically interesting than most mathematics that fits into existing frameworks.

@dougmerritt @MartinEscardo @JacquesC2 @pigworker @xenaproject

@johncarlosbaez

I don't agree that working with a proof assistant will reduce the chance that we'll come up with radical new ideas.

It's not at all difficult for me to picture someone like Grothendieck, who also broke out of many existing formalisms, writing his own library from scratch in order to express his ideas -- in many ways this is exactly what he did! Though of course he (and his collaborators) wrote a long series of books rather than a long list of agda/lean/etc files.

In fact, it's quite easy for me to picture someone like Grothendieck writing their own theorem prover! Perhaps in that world EGA/SGA would look much more like the currently-under-development synthetic algebraic geometry, formalized in a proof assistant that's custom made for arguments in the (big) Zariski or étale topos.

@andrejbauer @dougmerritt @MartinEscardo @JacquesC2 @pigworker @xenaproject

@hallasurvivor

It is not clear who specifically you are replying to. I hope it is not me.

@johncarlosbaez @andrejbauer @dougmerritt @JacquesC2 @pigworker @xenaproject

@MartinEscardo

Sorry, let me adopt your convention to make my reply clearer -- but you can tell I was replying to John, since it's his post that shows up above mine when you click my post (at least on my instance).

@johncarlosbaez @andrejbauer @dougmerritt @JacquesC2 @pigworker @xenaproject

@hallasurvivor - I don't think Grothendieck broke out of existing formalisms in quite the way I'm talking about. Though his work broke radically new ground, all of it was rigorized very quickly. The people I mentioned did work that either took a century or more to make rigorous, or is still in the process of being brought into the fold of rigorous mathematics. I'm talking about Newton's calculus, Euler's manipulations of divergent series to compute the zeta function, Feynman's path integrals, and the many path integral "proofs" given by Witten.

There are dozens of less famous but still interesting examples. I would never have written a paper about the Cobordism Hypothesis (and the still less finished Tangle Hypothesis and Generalized Tangle Hypothesis) if I had been thinking about formal definitions or proofs.

(That's an example that comes from outside analysis! But my other examples suggest analysis is a lot more future-leaning than algebra, in the sense of racking up debts against future formalization.)

@johncarlosbaez
John, I'm a huge fan of nonstandard analysis. I get that not everyone is, but what I found interesting about the history of NSA is that it started out informal and was ridiculed; then an alternative formalism was invented, which I find... annoying; and then Abraham Robinson came along, wanted NSA to be rigorous, and found a way to make it so. Since then, others have found simpler ways, and in some ways
@andrejbauer @dougmerritt @MartinEscardo @JacquesC2 @pigworker @xenaproject

@johncarlosbaez
the hyperreals are so "easy" to formalize that it's hard to imagine how long we went without a formalism.

What's also interesting is how strong the pushback against NSA is. Multiple times a week on the internet, calculus students ask whether dy/dx is a fraction or not... mention nonstandard analysis and you're likely to get downvoted and attacked as unhelpful... it happens on r/learnmath regularly.

@andrejbauer @dougmerritt @MartinEscardo @JacquesC2 @pigworker @xenaproject

@johncarlosbaez
So, while I think of formalism as a needed tool, I'm also with John in thinking that formalism is not always helpful. If formalism were enough, we could replace all math books by the axioms of ZFC and be done with it... everything else left as an exercise for the reader 😅
@andrejbauer @dougmerritt @MartinEscardo @JacquesC2 @pigworker @xenaproject

@johncarlosbaez writes "But in fact I don't think formalizing mathematics at all makes us think the right way."

You should try it once to see for yourself. You are talking about something you've never done.

I say this because I also said the same thing in the past.

And here is the first time I repent in public:

Computing an integer using a Grothendieck topos
https://math.andrej.com/2021/05/18/computing-an-integer-using-a-sheaf-topos/

@andrejbauer @dougmerritt @JacquesC2 @pigworker @xenaproject

Mathematics and Computation | Computing an integer using a Grothendieck topos

@MartinEscardo - how long would it take to "try it once"? Then I can weigh that against other things I could do with that time.

I have absolutely nothing I want to formalize, and I'd be weighing time spent on this against things like learning how to make music on a digital audio workstation, which I put off because it doesn't provide the instant gratification of playing a piano, but which I know I'd like in the end. In either case I know I'll start by suffering for hours, learning someone else's syntax and conventions instead of thinking my own thoughts. But for music software I know I'll eventually get a product I enjoy that I couldn't make any other way.

@johncarlosbaez @MartinEscardo It took me weeks to finish my first non-trivial proof (preservation in an STLC). I'd think there's stuff that can be knocked out in a few hours that still retains the flavor, but no example leaps to mind.

Have a look at these Lean games:

https://adam.math.hhu.de/

It's good to work a few examples to see how they spell out in a proof assistant. It's not difficult! It's fun!

I see proof assistants as a sort of calculator for assumptions. They really shine when you throw lots of stuff at them, like verifying computer programs, which involves lots of boring work that a proof assistant can help with! If it's a burden, then don't use a proof assistant.
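For a sense of scale, here is roughly the kind of statement the first Lean games walk you through (a minimal Lean 4 sketch, not taken from the games themselves):

```lean
-- n + 0 = n holds by definition, since Nat.add recurses on its
-- second argument:
theorem add_zero' (n : Nat) : n + 0 = n := rfl

-- 0 + n = n needs a short induction:
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

The asymmetry between the two proofs is exactly the sort of thing the games are designed to make you notice.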

@johncarlosbaez @MartinEscardo

Lean Game Server

@johncarlosbaez asked:
> how long would it take?

I just noticed I didn't answer your question. An hour is fine. More is more.

@MartinEscardo

@johncarlosbaez It is absolutely fine not wanting to try it.

But then please stop saying things such as "I don't think formalizing mathematics at all makes us think the right way" or other claims about formalization.

If you want to speak with authority about formalization, then learn it first. And, yes, it is not easy or fast to learn it.

Having spent several years in the trenches of formalized mathematics by now, I'm actually more sympathetic to @johncarlosbaez's line of thinking than I used to be, but I think there's nothing about formalized mathematics *per se* that forces this to be the case.

The way I've come to use proof assistants/etc. over the years actually, counterintuitively, ends up making math more "empirical," in a way. My informal proofs and ideas become "hypotheses" I can "test" by attempting to find a formalism and suitable abstractions that make them checkable by a computer. And like any good scientific experiment, this quickly becomes an iterative process—take some informal ideas, attempt to formalize them, get some data back about what ends up being difficult, refine the ideas, repeat until a satisfactory equilibrium is found. And this process itself can lead to a lot of "a ha" moments and "radical" new ideas.

As already noted, however, this process carries the risk of railroading one's thoughts into those ways of thinking that are more easily formalized in a particular system. But imo this is a failure of that particular system to be sufficiently syntactically/semantically flexible, and not of formalism/interactive theorem proving in general.

The future I hope for, and which I am actively building toward, is one in which we have general systems for defining, simulating, and verifiably translating between different logical/formal systems, so that if someone has a new mathematical idea they want to try out it's easy to get up and running with a system for testing it and relating it to other frameworks.

@andrejbauer @dougmerritt @MartinEscardo @JacquesC2 @pigworker @xenaproject

@cbaberle said "And this process itself can lead to a lot of "a ha" moments and "radical" new ideas."

I have found the same thing. When I am working on things that are not directly about formalized mathematics, but am using a proof assistant as a blackboard (echoing Martin's wonderful phrasing), I feel much freer to make wild conjectures, because I can disprove them equally quickly.

The number of "models" of quantum programming based on traced monoidal categories (that did not in fact work) is staggering. The failures were usually quite subtle. My co-author(s) and I had convinced ourselves via 'paper math' that each and every one of them worked.

@johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject

@JacquesC2 @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject I feel like there is a bit of a selection bias here. Would you say that formalization is as useful as a blackboard for eg number theory or geometric measure theory, as it is for type-theory/logic/computer science?
There is a huge UX problem, due to the fact that most mathematical research is done with objects and methods that fit badly into the current text-based, heavily formal (I don't know how to say this better) proof assistants. So my impression is that for most mathematics, working formalization-first would be as painful and counterproductive as it would be for a PL/type theorist to work only on a whiteboard.
Hopefully this will change in the future! I do believe that formalization is very useful for mathematics.

@mc These areas might be more painful, but they contain mistakes. I found a few measure theoretic mistakes in some recent papers / theses over the last year or two.

@JacquesC2 @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject

@ohad @JacquesC2 @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject yeah but that's not my point. Is formal-first, agda-as-a-blackboard, a good fit for those disciplines with the current tools? One can definitely argue for formalization of the results *at some point* as a means to check their correctness (I'm a bit skeptical of that too, but it's not a hill I want to die on)

@mc
As a rule of thumb, I'd say if a domain has had 5+ formalizations already, then yes, formal-first works. Two or fewer, likely not.

My 5 might be too low.

@ohad @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject

@JacquesC2 I haven't got to the point where I can use a tool like Agda as a whiteboard for my own research the way Martín does.

I have used it as a whiteboard to explain concepts to other people, but that's probably because we insist on clipping our students' wings by not teaching them the relevant mathematics.

I say 'students' here with a very broad interpretation, we are all students indefinitely.

@mc @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject

@JacquesC2 that said, I typically don't use tools like Agda to check my maths anyway. Rather, I do some maths (traditionally, on the whiteboard, then on paper, then in LaTeX) in order to work out how to explain enough of it to Agda or Idris so it will let me write a program I cannot write in other kinds of languages.

@mc @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject

@ohad writes "we are all students indefinitely."

+∞

I certainly am. Of mathematics and everything.

@JacquesC2 @mc @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @pigworker @xenaproject

@mc

You don't need to formalize all prior art to be able to do what you want in blackboard mode.

For example, @de_Jong_Tom and I used a result by Mike Shulman, which was published and also formalized in Rocq/HoTT, in TypeTopology.

We didn't want to formalize it (although I have been entertaining the possibility of formalizing it, to understand it better, in my own terms - formalizing for the sake of thinking about it).

So we just used it as a hypothesis to our desired result. That is, we proved "Shulman's result implies our result".

This doesn't establish the ultimate truth of our result in TypeTopology.

But it does allow us to keep using TypeTopology as a blackboard, without worrying about formalizing everything that every mathematician has done on this planet in order to carry on.

And without any lack of honesty. All assumptions are explicitly given. The adopted `--safe` flag guarantees that. There are no postulates. Just explicit assumptions.
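A hypothetical sketch of the pattern, in Lean for concreteness (the name `ShulmanStmt` is a stand-in, not the real statement): the external result enters as a hypothesis in the theorem's own type, never as a global postulate, so the development stays safe.

```lean
section
-- Stand-in for the statement of the external theorem (hypothetical):
variable {ShulmanStmt : Prop}

-- A toy "our result", stated conditionally.  The dependency is
-- recorded in the type itself; nothing is postulated globally.
theorem our_result (shulman : ShulmanStmt) :
    ShulmanStmt ∧ ShulmanStmt :=
  ⟨shulman, shulman⟩
end
```

Anyone who later formalizes the external statement can discharge the hypothesis by simple application.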

@ohad @JacquesC2 @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @pigworker @xenaproject

@MartinEscardo @de_Jong_Tom @ohad @JacquesC2 @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @pigworker @xenaproject that's a cool trick! One still needs a whole lot of definitions, though, and might not have the skills or patience to build those... obviously this is going to change as the volume of existing formalization grows. There is going to be a critical value at which point people can just spin up an ITP and start formalizing with the same ease they can write a preliminary section.
Loogle - Search Lean and Mathlib

Loogle is a search tool for finding definitions, theorems, and lemmas in Lean 4 and Mathlib.

@andrejbauer Indeed… This is sad. (Yesterday, a colleague used this to find the proof that compact spaces have a uniform structure, simply by entering "CompactSpace _ -> UniformSpace _")
@mc @MartinEscardo @de_Jong_Tom @ohad @JacquesC2 @cbaberle @johncarlosbaez @dougmerritt @pigworker @xenaproject
@andrejbauer Maybe there are too many functions that have this pattern: "_ + 1 < _" already gives 92 functions!
@mc @MartinEscardo @de_Jong_Tom @ohad @JacquesC2 @cbaberle @johncarlosbaez @dougmerritt @pigworker @xenaproject
@antoinechambertloir @andrejbauer For the record, `n + 1` is the pattern for the successor case for pattern-matching on `Nat`, so this is equivalent to searching for `succ n =` in some Agda library (I would search TypeTopology as an example; however, I am locked out of GitHub since 2FA is a nuisance)
@mc @MartinEscardo @de_Jong_Tom @ohad @JacquesC2 @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @pigworker @xenaproject I've recently learned about https://theoremsearch.com/ which is more about informal math, but still might be useful
TheoremSearch

Semantic Search over 9 Million Mathematical Theorems

@JacquesC2

It’s the search functionality that is supported for Markdown books, so it is essentially just searching for the string typed in the search bar. In combination with the natural-language exposition for each concept and each block of code, this works reasonably well. Search for a concept, and if it is formalized, it will show you its page, which also contains its basic properties and often, at the bottom, a see-also section with links to related pages.

We’ve put a lot of effort into the online navigability of our library!

@mc @MartinEscardo @de_Jong_Tom @ohad @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @pigworker @xenaproject

@mc writes "that's a cool trick! "

It's not a trick. It is what we do all the time when we use somebody else's theorems in our own papers.

@de_Jong_Tom @ohad @JacquesC2 @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @pigworker @xenaproject

@mc Did not think I would ever see stating and using someone else's result called "a trick"! It's extremely common in math literature.
@oantolin What's not common at all is to state your results as conditional on a proven result. You usually just invoke it in the proof.
@mc @oantolin I think this is still common.
@mc Oh, I see. That's specifically what you meant is a trick? OK. Is this strategy of including as a hypothesis the result you don't want to prove forced by Agda or can you tell Agda to assume something without proof (which I guess would be more closely analogous to stating and then using a result)?
@oantolin Honestly I don't know precisely what difference it makes from a technical viewpoint. Unless the proof 'computes', in which case I assume the compiler would just get stuck at an invocation.
@mc @oantolin For example, agda-unimath globally assumes univalence and one is free to use the term "eq-equiv" at any point in their proofs, while in TypeTopology univalence is always explicitly assumed in a theorem type (or possibly in a module, if it is used often in a file).
@ToucanIan @mc @oantolin For whatever it's worth, I kind of view this univalence discipline in the same way I think of an algebraic geometry book assuming things are locally Noetherian: I don't actually mind making this assumption even if it's not *strictly* necessary, but I do like it when it's not a standing assumption when I'm reading a book. It's then a clue for how the proof goes. (I think I learned of this viewpoint from Vakil)

@mc writes "Honestly I don't know precisely what difference it makes from a technical viewpoint."

A big difference is that postulates are not allowed under the flag `--safe`, because postulates may in principle make the system inconsistent.

On the other hand, hypotheses are always safe, although they may be vacuous assumptions, of course. But they don't make everything inconsistent if they are vacuous.
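In Lean terms, the same distinction looks like this (a toy sketch; the inconsistent axiom is left as a comment so the block stays checkable):

```lean
-- A postulate extends the theory itself; nothing prevents it from
-- being inconsistent:
--
--   axiom magic : False   -- with this, *everything* is provable
--
-- A hypothesis is merely a bound variable.  Even a vacuous one is
-- harmless: the theorem below is true, just never applicable.
theorem from_vacuous (h : False) : 1 = 2 := h.elim
```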

@oantolin

@mc @oantolin A more technical answer: postulates are primitive constants adjoined to the theory, whereas assumptions are bound variables that are abstracted outside of their scope.

@mc

This issue of fit (which is indeed related to linguistics) is why CASes have a huge leg up on ITPs regarding UX. But they also have their fit problems, never mind their correctness problems.

Mathematica's UI coupled with a nice ITP would be a vast improvement, though making that viable would need a complete rewrite of both halves.

Then we'd be in a good position, as a community, to actually figure out the "next generation" system.

@cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject

@mc I definitely agree with this, and I have a bunch of thoughts about fixing it. Some programmatic remarks:

- Interactive theorem proving and text-based representations are already uneasy bedfellows. Let's embrace the "interactive" part and move to true structure editing where proof data is stored as abstract syntax trees and the language provides an open protocol for interfacing with such ASTs, on top of which one can implement whatever UI one pleases. Text-based editing becomes just one such "view" of the underlying proof data.
- A lot of "ordinary" math comes down to: draw a structured graph (in *both* senses of the word "graph") of some sort, and make some conclusion based on its structure. By now there's plenty of existing work on compiling things like commutative diagrams/string diagrams/etc. to syntax trees, etc., and we can and *should* make use of this work to provide more convenient interfaces to ITPs, via protocols as above.
- Interactive theorem proving deserves interactive documentation. We should have an analogue of Jupyter/Mathematica Notebooks for ITPs where different editor UIs can be mixed and matched, data can be displayed in a variety of formats, etc.

@JacquesC2 @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject

@johncarlosbaez @dougmerritt @MartinEscardo @JacquesC2 @pigworker @xenaproject Well, it should be obvious, but let me say it anyway.

We need mathematicians of every kind: the thinkers, the dreamers, the formalizers, and even physicists.

@andrejbauer - I agree: I want a diversity of approaches. Quotes like "It forces you to think about mathematics in the right way" are what scare me - together with the money that's getting poured into Lean right now, which could fool a young mathematician into thinking this is "the" right way to do math.

I have no fear that *you* will be pressured into doing anything you don't want to, just because it's the hot new trend.

@johncarlosbaez @andrejbauer You don't have to think in the formally correct way, but it helps to know what the formally correct way is.
@sjb I think the point that both @johncarlosbaez and @andrejbauer are making is that there is no single correct way to think, whether you are working formally or traditionally.
@andrejbauer @johncarlosbaez @dougmerritt @MartinEscardo @JacquesC2 @pigworker @xenaproject And as well as those working at the cutting edge, we want mathematicians working on more mundane matters. Playing with simple (especially by cutting-edge standards) probability and statistics problems, I see and use things that didn't exist when I was an undergraduate. For example, methods for estimating the confidence interval of a binomial probability have gotten better.