Propositions As Types Analogy • 1
https://inquiryintoinquiry.com/2013/01/29/propositions-as-types-analogy-1/

One of my favorite mathematical tricks — it almost seems too tricky to be true — is the Propositions As Types Analogy. And I see hints the 2‑part analogy can be extended to a 3‑part analogy, as follows.

Proof Hint ∶ Proof ∶ Proposition

Untyped Term ∶ Typed Term ∶ Type

or

Proof Hint ∶ Untyped Term

Proof ∶ Typed Term

Proposition ∶ Type
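The middle column of the analogy can be made concrete: under Curry–Howard, a proposition is read as a type, and a proof of it as a typed term inhabiting that type. A minimal sketch in TypeScript (tuple types stand in for conjunction; the names are illustrative, not standard):

```typescript
// Proposition: A → A (identity); its proof is the identity term.
const proofId = <A>(x: A): A => x;

// Proposition: (A ∧ B) → A (conjunction elimination);
// conjunction is modeled by the tuple type [A, B].
const proofFst = <A, B>(pair: [A, B]): A => pair[0];

// Proposition: A → (B → A) (weakening; the K combinator).
const proofK = <A>(x: A) => <B>(_: B): A => x;

console.log(proofFst([1, "two"]));
```

Erasing the type annotations from these terms gives untyped λ-terms, which is the sense in which an untyped term serves as a "proof hint": it records the computational shape of a proof without yet committing to which proposition it proves.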

See my working notes on the Propositions As Types Analogy —
https://oeis.org/wiki/Propositions_As_Types_Analogy

#Mathematics #CategoryTheory #ProofTheory #TypeTheory
#Logic #Analogy #Isomorphism #PropositionalCalculus
#CombinatorCalculus #CombinatoryLogic #LambdaCalculus
#Peirce #LogicalGraphs #GraphTheory #RelationTheory

I was recently reading Turing's essay "Intelligent Machinery" (<https://archive.org/details/turing1948>, <https://doi.org/10.1093/oso/9780198250791.003.0016>), and Turing says something very interesting:

"Recently the theorem of Gödel and related results (Gödel, Church, Turing) have shown that if one tries to use machines for such purposes as determining the truth or falsity of mathematical theorems *and one is not willing to tolerate an occasional wrong result*, then any given machine will in some cases be unable to give an answer at all."

The emphasis is mine. I didn't know about that clause, "and one is not willing to tolerate an occasional wrong result". Can any mathematician or logician here tell me where I can find more technical details on this, and what Turing meant by it? Thank you!

#mathematics #logic #prooftheory
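One way to read the trade-off in Turing's clause: a procedure can be sound but partial (it sometimes gives no answer at all) or total but fallible (it always answers, occasionally wrongly), but not both for an undecidable question. A toy sketch of the two styles, with a bounded search over the naturals standing in for a hard universal claim (all names here are illustrative):

```typescript
// Sound but partial: answer only when a witness is found.
// A return of null means "unable to give an answer at all".
function boundedSearch(fuel: number, p: (n: number) => boolean): boolean | null {
  for (let n = 1; n <= fuel; n++) {
    if (!p(n)) return false;   // counterexample found: a sure "no"
  }
  return null;                 // fuel exhausted: refuse to commit
}

// Total but fallible: after the same budget, simply assert the claim.
function guessing(fuel: number, p: (n: number) => boolean): boolean {
  for (let n = 1; n <= fuel; n++) {
    if (!p(n)) return false;
  }
  return true;                 // may be wrong beyond the bound
}

console.log(boundedSearch(100, n => n % 2 === 0)); // false
console.log(boundedSearch(100, n => n > 0));       // null
console.log(guessing(100, n => n > 0));            // true
```

The first machine never lies but sometimes stays silent; the second always speaks but can be wrong, which is exactly the choice Turing's sentence poses.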

Formal verification — machines constructing and checking mathematical proofs — keeps getting better. Cobblestone (@TaliaRinger, https://arxiv.org/abs/2410.19940) and other divide-and-conquer approaches now automate proofs we once thought unreachable.

But something feels different about where this is heading.

As we push automation forward, we’re starting to touch the edges of what a “proof” actually is — not just a verified computation, but a statement about what cannot be disputed.

I’m not sure the field has fully realized how close we’re getting to an ontological shift in what counts as proof itself.

#ProofTheory #FormalVerification #AI #Logic #SciComm

Cobblestone: A Divide-and-Conquer Approach for Automating Formal Verification

Formal verification using proof assistants, such as Coq, is an effective way of improving software quality, but requires significant effort and expertise. Machine learning can automatically synthesize proofs, but such tools are able to prove only a fraction of desired software properties. We introduce Cobblestone, a divide-and-conquer approach for proof synthesis. Cobblestone uses a large language model (LLM) to generate potential proofs, uses those proofs to break the problem into simpler parts, automatically identifies which of those parts were successfully proven, and iterates on the remaining parts to build a correct proof that is guaranteed to be sound, despite the reliance on unsound LLMs. We evaluate Cobblestone on four benchmarks of open-source Coq projects, controlling for training data leakage. Fully automatically, Cobblestone outperforms state-of-the-art non-LLM tools, and proves many theorems that other LLM-based tools cannot, and on many benchmarks, outperforms them. Each Cobblestone run costs only $1.25 and takes 14.7 minutes, on average. Cobblestone can also be used with external input, from a user or another tool, providing a proof structure or relevant lemmas. Evaluated with such an oracle, Cobblestone proves up to 58% of theorems. Overall, our research shows that tools can make use of partial progress and external input to more effectively automate formal verification.
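The loop the abstract describes, an unsound generator filtered through a trusted checker, can be caricatured in a few lines. This is a hypothetical toy model under my own names, not the paper's actual pipeline; "n is even" stands in for a real proof goal, and the checker plays the role of Coq's kernel:

```typescript
type Goal = number;

// Trusted checker (the proof assistant's kernel): accepts only
// goals it can actually verify. Here: "n is even".
const checker = (n: Goal): boolean => n % 2 === 0;

// Untrusted generator (the LLM): proposes candidate subgoals,
// some of them bogus.
const generate = (n: Goal): Goal[] => [n, n + 1, n - 2];

// Keep only checker-approved subgoals. The surviving result is
// sound no matter how unreliable the generator is; the rejected
// parts are what the real system would iterate on.
const synthesize = (g: Goal): Goal[] => generate(g).filter(checker);

console.log(synthesize(4)); // [ 4, 2 ]
```

The point of the caricature: soundness lives entirely in the checker, so the generator is free to be wrong, which is how an unsound LLM can still produce proofs "guaranteed to be sound".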


Hi everyone — I’m Carlos Tomas Grahm, an independent mathematician with a background in continuum mechanics and mathematical logic.

I started in modeling under an NSF-funded Texas A&M grant, developing what’s still the most accurate carotid-artery model in the literature.

These days I’m exploring how the structure of definitions shapes proofs — from ordered vs. non-ordered reasoning to broader questions in complexity theory.

I’m here to share occasional notes (and probably too many thoughts) on proof structure, modeling, and the weirdly human process of finding rigor.

Looking forward to meeting others who love the math side of things — whether it’s theory, teaching, or applied modeling.

#Mathematics #Logic #Modeling #Complexity #ProofTheory #Mathstodon


Greg Restall (@[email protected]):

Tomorrow, I get to give the last of my three talks on inferentialism. It’s time to buckle up your λs, and join in the search for some unicorns…

https://consequently.org/presentation/2025/whl-a/

#prooftheory #semantics #linguistics

My very first package on CRAN! <https://cran.r-project.org/package=Pinference>
I hope it may be of use especially to teachers of the basics of probability and of symbolic logic and proof theory.

Uncountable thanks to all R people here who kindly helped with all my problems along the way. 🙏

#rstats #probability #logic #prooftheory

Pinference: Probability Inference for Propositional Logic

Implementation of T. Hailperin's procedure to calculate lower and upper bounds of the probability for a propositional-logic expression, given equality and inequality constraints on the probabilities for other expressions. Truth-valuation is included as a special case. Applications range from decision-making and probabilistic reasoning to pedagogical use in probability and logic courses. For more details see T. Hailperin (1965) <doi:10.1080/00029890.1965.11970533>, T. Hailperin (1996) "Sentential Probability Logic" ISBN:0-934223-45-9, and the package documentation. Requires the 'lpSolve' package.
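The core idea, sketched without the linear-programming machinery the package actually uses: from P(A) and P(B) alone, the probability of a compound expression is only bounded, not determined. A minimal illustration of the simplest two-variable case (the closed forms below are the classical Fréchet bounds, a degenerate instance of Hailperin's general procedure):

```typescript
// Bounds on P(A ∨ B): the disjuncts may overlap completely
// (lower bound) or be disjoint (upper bound, capped at 1).
const orBounds = (pa: number, pb: number): [number, number] =>
  [Math.max(pa, pb), Math.min(1, pa + pb)];

// Dual bounds on P(A ∧ B).
const andBounds = (pa: number, pb: number): [number, number] =>
  [Math.max(0, pa + pb - 1), Math.min(pa, pb)];

console.log(orBounds(0.25, 0.5));  // [ 0.5, 0.75 ]
console.log(andBounds(0.25, 0.5)); // [ 0, 0.25 ]
```

With more expressions and more constraints these bounds are no longer closed-form, which is where the linear-programming formulation (and the lpSolve dependency) comes in.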

Coming up this afternoon, I’m giving the talk “Inferentialism for Everyone” for the local Arché Metaphysics and Logic crew here in St Andrews.

This talk attempts to distill material I’ve been thinking about for the last decade or so down to a concentrated but accessible form. I look forward to discovering how successful the distillation efforts are…

https://consequently.org/presentation/2025/whl-a/

#logic #prooftheory #inferentialism #semantics

I've followed these odd reductions back to the original source, 'Ideas and Results in Proof Theory' by Prawitz (1971); see attached image. These rules are introduced alongside the more usual ones, but not really discussed later as far as I can tell, except implicitly in a section where he notes that not everyone would accept rules beyond beta reduction as capturing the notion of 'the same proof'. He asserts uniqueness of normalisation, which these rules clearly break. Despite this being a quite heavily cited paper (~1000 cites), no one seems to have explicitly noted there is anything odd here until a paper by Dyckhoff in 2014, as best I can tell! #logic #proofTheory

Discussing “Extended Curry-Howard Correspondence for a Basic Constructive Modal Logic” – The Updated Scholar