I've been thinking about something that's related to Tarski's undefinability theorem (and might even be a corollary of it, viewed from the right angle), but concerns computability rather than truth. It seems to have implications for systems like Agda and Rocq, which I don't have much experience with, but which I believe (at least with certain settings) only let you define computable things. I'd love to know if this is already well-known.
Suppose you have a system, S, that lets you encode (as natural numbers) definitions of total computable functions from Nat to Bool. Not every natural number has to represent such a function, but it must be decidable whether a given natural number n is the encoding of one, and if it is, you must be able to evaluate the encoded function at any natural number you want.
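To make the requirements on S concrete, here's a toy stand-in in Python. This is purely illustrative: indices into a finite list play the role of the natural-number encodings, and `is_encoding` and `evaluate` are hypothetical names for the two capabilities S must provide.

```python
# Toy model of S. Some total functions from Nat to Bool:
TABLE = [
    lambda n: n % 2 == 0,  # "is n even?"
    lambda n: True,        # constantly True
    lambda n: n < 10,      # "is n small?"
]

def is_encoding(n):
    # Decidable: we can always tell whether n encodes a function of S.
    return 0 <= n < len(TABLE)

def evaluate(n, x):
    # Total: any encoded function can be evaluated at any argument.
    return TABLE[n](x)
```

In the real setting the supply of encodings is infinite (every definition S accepts gets a code), but these two operations are all the argument below needs.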
Now we can define a function f that takes a natural number n as input and returns True if and only if n is the encoding (in S) of a computable function g from Nat to Bool and g(n) = False.
By the constraints I described for S, this must be a computable function. So can it be encoded in S?
No! If it could be encoded as a natural number k, then f(k) would be True if and only if f(k) = False, which is impossible.
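The diagonal step can be checked mechanically in a toy model. Below, indices into a finite Python list stand in for S's encodings (a hypothetical setup for illustration; the real S would encode infinitely many definitions), and f is exactly the function defined above.

```python
# Toy model of S: indices into TABLE are the "encodings".
TABLE = [
    lambda n: n % 2 == 0,
    lambda n: True,
    lambda n: n < 10,
]

def is_encoding(n):
    return 0 <= n < len(TABLE)

def evaluate(n, x):
    return TABLE[n](x)

def f(n):
    # The diagonal function: True iff n encodes some g and g(n) = False.
    return is_encoding(n) and not evaluate(n, n)

# f disagrees with every encoded function at that function's own index,
# so f itself can't appear anywhere in TABLE:
for k in range(len(TABLE)):
    assert f(k) != evaluate(k, k)
```

However many functions you put in the table, f flips each one's value on the diagonal, so it can never be one of them.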
So Agda, for example, with settings that allow only definitions of computable things, can't allow definitions of all computable functions. In particular, it seems it can't implement a complete simulation of Agda running with those same settings, i.e. a self-interpreter.
This is almost the opposite of the situation with the halting problem. A universal Turing machine can simulate any Turing machine, including a universal one, but it can't always tell whether a given machine will halt on a given input. Agda can guarantee that any algorithm you write in it will halt, but, if my reasoning above is correct, it can't simulate itself.
So what other computable functions might not be definable in Agda? What obstacles would you find, for example, if you tried to simulate Agda in Agda?
#Agda #Rocq #computability #logic