RE: https://mathstodon.xyz/@markusde/116020715195099414

Oh boy, I've been *asked* to rant about constructivity and Lean's type theory, so here I come. Beware, long thread incoming (I tried to keep it reasonable, I have definitely failed).

Let me start with some clarification of what I and other type theorists care about. I would say that the most contentious thing for us about the way Lean sets things up is *not* a constructivity issue, but a completeness issue. For reasons I'll develop below, Lean decided to implement an undecidable type theory. In practice, this means that the kernel is incomplete, ie it can sometimes fail to detect things that should be definitionally equal or well-typed wrt. the "official" theory. We believe this is an issue because it implies that various operations which are fine in the official theory are not in practice: for instance, you can have a = b = c (definitionally), with both equalities checked fine by the kernel, but it might fail to check that a = c, not because it does not hold in the theory, but because of incompleteness. Similarly, we might have a well-typed term t that evaluates to some t', but the kernel fails to check that t' is well-typed, again because of incompleteness.
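To make this concrete, here is a hedged Lean 4 sketch (exact `termination_by` syntax varies across versions): a function defined by well-founded recursion evaluates just fine, but the kernel will typically refuse the corresponding definitional equality, even though it holds in the official theory where accessibility unfolds definitionally.

```lean
-- Sketch: `count` is defined by well-founded recursion on `10 - n`,
-- not by structural recursion on `n`.
def count (n : Nat) : Nat :=
  if h : n < 10 then count (n + 1) + 1 else 0
termination_by 10 - n

#eval count 0  -- the compiler evaluates this without trouble
-- example : count 0 = 10 := rfl
-- ^ typically rejected: the kernel does not unfold the underlying
--   `WellFounded.fix`, an instance of the incompleteness above
```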

1/7

Now, regarding constructivity. There are many ways to define what a constructive system is, but a good rule of thumb in type theory is given by the *canonicity* property. This property is meant to capture the fact that "things compute fine". More technically, what it says is that closed objects (ie those that contain no free variables) of certain types can be evaluated to results of a reasonable form. For instance, canonicity for ℕ says that a closed `n : ℕ` can be evaluated to yield something of the form `suc (suc … 0)`. More interestingly, canonicity for Σ types says that from a closed `p : Σ x : A, B x` we can extract an `a : A` such that `B a` holds. Constructivists call this the *witness property*: from a proof of an "existential" we can extract a witness of the existence.
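In Lean this is directly visible (a small sketch; `Σ'` is Lean's Σ that also accepts a `Prop` on the right):

```lean
-- A closed proof of a Σ-type evaluates to an actual pair, so the
-- witness can be projected out: the witness property.
def p : Σ' n : Nat, n + n = 4 := ⟨2, rfl⟩

#eval p.1                 -- prints 2, the extracted witness
example : p.1 = 2 := rfl  -- and this holds by computation alone
```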

The nice thing about Lean (and, more generally, CIC) is that it provides the special universe `Prop`, which is a way to negotiate with constructivity, by letting us separate out the things where we are ready to maybe compromise on it. Concretely, the idea is that while we expect `Σ x : A, B x` to have a witness property, we might be very fine with `∃ x : A, P x` not having one, and the type theory is designed to enforce that separation, making it so that we can retain canonicity for `Σ` and `ℕ` while breaking it in horrible ways for `∃`. In a sense, proof irrelevance, the fact that any two proofs of the same proposition are equal (definitionally!), embodies the fact that proofs carry no information beyond truth, and thus we should not hope to extract anything out of them.
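A minimal sketch of that separation, with the rejected definition left commented out:

```lean
-- Projecting a witness out of Σ' (Type-level) is fine:
def fromSigma (p : Σ' n : Nat, 0 < n) : Nat := p.1

-- The analogous projection out of ∃ (Prop-level) is rejected:
-- a proposition may not be eliminated into Type.
-- def fromExists (p : ∃ n : Nat, 0 < n) : Nat :=
--   match p with | ⟨n, _⟩ => n

-- Proof irrelevance: any two proofs of the same proposition are
-- definitionally equal.
example (p q : ∃ n : Nat, 0 < n) : p = q := rfl
```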

2/7

All of this to say that the critical critical critical point for constructivity is: what do we allow to escape from `Prop`? In other words, under which conditions can we build a function `P → A` when `P : Prop` and `A : Type`? And because this is a tricky topic, it so happens that the main issues with decidability in Lean come from there too. You might have heard of the singleton/subsingleton criterion. This is the main piece of the type theory in charge of controlling what escapes from `Prop`. The general form of the criterion is a bit of a mouthful and not very informative, so I'll instead concentrate on the main examples.

For a quick summary: `False`, easy; equality, fine if you are careful; `choice`, bad, but you asked for it; termination, tricky tricky.
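Before going through them, here is what the criterion looks like from the user's side (a sketch; the rejected case is commented out):

```lean
-- `False` (no constructors) and `Eq` (one constructor, no relevant
-- arguments) pass the subsingleton criterion, so they may be
-- eliminated into `Type`:
example (h : False) : Nat := h.elim
example {A B : Type} (h : A = B) (a : A) : B := h ▸ a

-- `Or` has two constructors, so it fails the criterion:
-- example {P Q : Prop} (h : P ∨ Q) : Bool :=
--   match h with
--   | .inl _ => true
--   | .inr _ => false
```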

3/7

Now let's be more precise. First comes the easy one: `False`. From a contradiction, you can derive anything *including relevant content*. This is very useful, typically to prune away unreachable branches of computation. Since we expect the theory to be consistent, there should be no proof of `False` in the empty context, so this will never impact canonicity. Even better: we can postulate any set of axioms in `Prop`; as long as they are consistent (ie they don't allow one to prove `False`), canonicity still holds, for free. In particular, you can postulate excluded middle (in its propositional form `(P : Prop) → P ∨ ¬ P`), or even crazy choice axioms, eg `(∀ x : A, ∃ y : B, R x y) → ∃ f : A → B, ∀ x, R x (f x)`: as long as they ultimately land in `Prop` you're good to go.
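For instance, here excluded middle is postulated as a bare axiom in `Prop` (a sketch; Lean's stdlib instead derives `Classical.em` from choice):

```lean
-- Postulating excluded middle in Prop: consistent, and canonicity for
-- Type-level data is untouched, since `em` only ever produces proofs,
-- never computationally relevant data.
axiom em : ∀ P : Prop, P ∨ ¬ P

-- `False` escaping Prop: from absurdity, relevant content for free,
-- typically used to discard unreachable branches.
def fromAbsurd (h : False) : Nat := h.elim
```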

4/7

Next, equality. Equality lets you get back from `Prop` to `Type`, because if you prove `A = B` you can build a function `A → B` (called `cast` in Lean). Now, this is a slightly more contentious topic, because 10 years ago, when Lean started playing with proof irrelevance + equality, it was not entirely clear what the status of this combination was, especially since Lean also adds quite a few things on top of equality: extensionality axioms (propositional and functional extensionality) and quotients. But we have made progress since, and the work of Loïc Pujet basically showed that we can do all of this in a perfectly orthodox way, preserving decidability, constructivity, and most things one can ask of a type theory. Orthodoxy is restored, everything is fine, no pointy-headed constructivists complaining any more!
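Concretely, a small sketch:

```lean
-- `cast` turns a propositional proof of a Type-level equality into a
-- genuine function:
example {A B : Type} (h : A = B) : A → B := fun a => cast h a

-- and it computes away on `rfl`:
#eval cast (rfl : Nat = Nat) 3  -- prints 3
```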

5/7

Now for some bad news: choice. In Lean's stdlib, choice says `∀ α, Nonempty α → α`. Here `α` is any `Type` and `Nonempty` is the way we turn a `Type` into a proposition, essentially identifying all its elements and destroying all computational content. I'll just quote Theorem Proving in Lean here: "`choice h` magically produces an element of `α`. Of course, this blocks any meaningful computation: by the interpretation of `Prop`, `h` contains no information at all as to how to find such an element." This completely kills the careful separation between the computational layer and the non-computational one: anything goes, game over. Lean tries to claw back some of that distinction by marking anything that uses `choice` as `noncomputable`, which is in a sense a way to recover it, but isn't it a bit sad? We had a perfectly good thing to make that distinction already, called `Nonempty`!
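A sketch of what that looks like in practice:

```lean
-- `Classical.choice : Nonempty α → α` escapes from Prop into Type,
-- but the resulting definition carries no executable content and must
-- be marked `noncomputable`:
noncomputable def pick (α : Type) (h : Nonempty α) : α :=
  Classical.choice h

-- #eval pick Nat ⟨0⟩  -- rejected: `pick` compiles to no code at all
```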

6/7

The final black-belt subtlety, just for fun: accessibility. One last very useful way to use a proposition is to define a (computationally meaningful) function by providing a propositional argument that it terminates. If you do this and want to keep canonicity, you need to allow such functions to compute. But you also cannot really look at how termination was proven if you want to uphold proof irrelevance. That implies that you kind of always have to compute. This is ok for canonicity (more computation is always good!), and is mostly ok in a closed context (by the same sort of argument as for `False`: if you can prove something terminates without any assumptions, you cannot lie, it must really be terminating, so evaluating it is fine). But in a non-empty context you can lie, provide a bogus proof of termination, and generate a looping function. In Lean this is avoided by putting a hard limit on the number of evaluation steps such a function is allowed to make. But this is a source of incompleteness, and of all the issues I mentioned at the very beginning. There are a number of subtle ways to navigate this; I'll let you read https://hal.science/hal-05474391v1/ for a very detailed exploration of what we can do.
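A sketch of the "lying in a non-empty context" point:

```lean
-- In the empty context, consistency guarantees that termination
-- proofs are honest. But under an absurd hypothesis we can "prove"
-- that a hopeless relation is well-founded:
example (h : False) : WellFounded (fun _ _ : Nat => True) := h.elim

-- Recursing along that relation would never stop, which is why an
-- evaluator that blindly unfolds such recursion needs a step limit.
```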

7/7

That's all for me folks! (And thanks @markusde for destroying a morning of productivity 😬)

@mevenlennonbertrand Great thread! I agree with all of it but want to especially re-emphasize:

Working classically is totally fine! There are some obvious pros and cons, but I think it's really neat to finally have a serious type-theoretic proof assistant with a serious math library that "looks like normal math" in a way that Rocq/Agda developments generally don't.

The incompleteness issues are troubling because they are at the kernel level, and say essentially that it does NOT implement the on-paper core theory. And it's a little frustrating too, because these are subtle issues that type theorists have already thought very hard about how to get right.