Regarding Leo’s post: it was in poor taste, but no poorer than what anyone who knows him has come to expect. My comment is that I bet Lean is inconsistent too. I don’t know exactly where or how; all I am saying is that I don’t doubt it. These are very sophisticated pieces of software.

In fact, I don’t doubt that those working in the trenches of the FRO could point to some dark corners and examples of unpublicised sources of inconsistency.

This isn’t to dunk on Lean. Bugs happen, and having a small kernel is a good way to limit their scope! But there are many ways for assurances to fail, and these failures can occur both inside and outside the kernel.

I am also not too impressed by the existence of second-party checkers in the context of a language that lacks decidable type checking. Because something has to give, these checkers will necessarily be incomplete in one way or another, most likely each in different ways. This is because Lean’s failure of completeness of conversion almost certainly manifests in heuristic ways that are not predictable or replicable…

For a fail-safe checker to provide confidence, it needs to be complete. This is not possible with Lean’s core language as designed, and it will not be possible for Lean itself until they find new leadership.
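[For readers following along: the soundness/completeness distinction at issue can be stated schematically as follows. This is a generic sketch in standard judgment notation, not the specification of any particular checker.]

```latex
% Soundness: the checker accepts only derivable judgments.
\forall \Gamma, t, T.\quad \mathsf{check}(\Gamma, t, T) = \mathsf{true}
  \implies \Gamma \vdash t : T

% Completeness: the checker accepts every derivable judgment.
\forall \Gamma, t, T.\quad \Gamma \vdash t : T
  \implies \mathsf{check}(\Gamma, t, T) = \mathsf{true}
```

If conversion (definitional equality) is undecidable, no terminating checker can satisfy both properties at once, which is the sense in which "something has to give" above.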

@jonmsterling AFAIU, the situation is even weirder: all the independent checkers essentially reimplement the exact same algorithm, with the exact same incompleteness, by following the original code. At least that was the case for Lean4Lean. So they are independent only in a rather weak sense of the term.
@mevenlennonbertrand @jonmsterling I believe they all also assume unique typing for Lean's type theory, but the proof of this property in Mario Carneiro's Master's thesis is incorrect and it remains open. For true diversity in type checker implementation, it would be nice to have a checker that doesn't itself rely on unproven conjectures about the type theory.
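[The property referred to here, unique typing (uniqueness of types up to conversion), can be stated schematically as follows; this is a generic formulation, not a quotation from the thesis in question.]

```latex
% Unique typing: any two types of the same term are convertible.
\Gamma \vdash t : T \ \wedge\ \Gamma \vdash t : T'
  \implies \Gamma \vdash T \equiv T'
```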
@zwarich @jonmsterling Claim: this is extremely hard. My main takeaway from verifying kernels is that every single thing you do implicitly relies on invariants which, to be established, require difficult meta-theoretic properties. At the very least injectivity of type constructors, which is wide open for a type theory like Lean's. This is baked in everywhere: from the fact that your evaluator does not need to re-infer and compare the types of sub-expressions every time it does a reduction step (for an example of what happens if you want to do this safely, you can look at Anja Petković Komel's PhD thesis), to the fact that you don't re-check the context every time you reach a leaf in a derivation tree. I don't think you could re-check Mathlib with a type checker of this kind, at least not without serious performance engineering, which would complicate your kernel and rather defeat the point.
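[For readers unfamiliar with the property being invoked: injectivity of type constructors says that convertible types built from the same constructor have convertible components. The paradigmatic instance, for Π-types, is sketched below in generic notation.]

```latex
% Injectivity of Pi: convertible function types have
% convertible domains and codomains.
\Gamma \vdash \Pi(x{:}A).\,B \equiv \Pi(x{:}A').\,B'
  \implies \Gamma \vdash A \equiv A'
  \ \wedge\ \Gamma, x{:}A \vdash B \equiv B'
```

This is what licenses, e.g., comparing an argument's type against only the domain of a function type during checking, rather than re-deriving the whole judgment.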
@mevenlennonbertrand @zwarich It would be really good to clarify in public just how badly EVERYTHING breaks if you don't have these basic syntactic properties. It's not just some pointy-headed thing... it's one of the important prerequisites for getting to the level of assurance that Leo is recklessly claiming for Lean.
@jonmsterling @zwarich If I had a dollar for every time someone does something that is only safe if type constructors are injective… But yeah, I don't know what to do to make this more visible at this point.
@mevenlennonbertrand @zwarich It's so frustrating. I feel we are chafing under the pre-scientific nature of type theory at this time... It remains, even after the revolutions of the past decade, really difficult to speak about these things in generally applicable terms that aren't specific to a particular system.
@jonmsterling @mevenlennonbertrand @zwarich I want to plug SOGATs here, and in particular @rafaelbocquet's work on synthetic approaches to metatheorems with SOGATs https://rafaelbocquet.gitlab.io/pdfs/thesis.pdf
@olynch @mevenlennonbertrand @zwarich @rafaelbocquet I mean, I know that stuff well, have made many contributions to it over the years, and my entire PhD thesis was about scaling those metatheorems; but my point is that IN SPITE of these tools, we still can't really talk about these syntactic problems in a clear and general-purpose way, because semantic properties tend to splinter into a zillion interrelated properties of syntax-in-the-computer and the algorithms surrounding it. The whole discussion of "subject reduction" is a good example of how something non-semantic can actually be related in complex ways to semantic things (including both injectivity of type constructors and decidability).