Readings shared January 24, 2026

The readings shared in Bluesky on 24 January 2026 are: Abel's limit theorem (in Isabelle/HOL). ~ Kangfeng Ye. #ITP #IsabelleHOL #Math A formalization of the downward Löwenheim-Skolem theorem in Coq.

Vestigium
Elementary proofs of ring commutativity theorems. ~ Michael Kinyon, Desmond MacHale. https://arxiv.org/abs/2601.12599v1 #ATP #Prover9 #Math
Elementary proofs of ring commutativity theorems

Jacobson's commutativity theorem says that a ring is commutative if, for each $x$, $x^n = x$ for some $n > 1$. Herstein's generalization says that the condition can be weakened to $x^n-x$ being central. In both theorems, $n$ may depend on $x$. In this paper, in certain cases where $n$ is a fixed constant, we find equational proofs of each theorem. For the odd exponent cases $n = 2k+1$ of Jacobson's theorem, our main tool is a lemma stating that for each $x$, $x^k$ is central. For Herstein's theorem, we consider the cases $n=4$ and $n=8$, obtaining proofs with the assistance of the automated theorem prover Prover9.
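The smallest odd-exponent case of Jacobson's theorem, $n = 3$, can be posed directly as a Prover9 problem. The following is a hedged sketch of such an encoding (not the paper's actual input file): the ring axioms, no unit assumed, plus the hypothesis $x^3 = x$, with commutativity as the goal. This case is a classic automated-deduction benchmark.

```
% Hedged sketch (not the paper's input file): the n = 3 case of
% Jacobson's theorem as a Prover9 problem.  Ring axioms (no unit
% assumed) plus x^3 = x; the goal is commutativity of *.
formulas(assumptions).
  (x + y) + z = x + (y + z).          % + is associative
  x + y = y + x.                      % + is commutative
  x + 0 = x.                          % 0 is the additive identity
  x + (-x) = 0.                       % additive inverses
  (x * y) * z = x * (y * z).          % * is associative
  x * (y + z) = (x * y) + (x * z).    % left distributivity
  (y + z) * x = (y * x) + (z * x).    % right distributivity
  x * (x * x) = x.                    % the hypothesis x^3 = x
end_of_list.

formulas(goals).
  x * y = y * x.                      % commutativity
end_of_list.
```

Saved as, say, `jacobson3.in` (a hypothetical filename), it can be run with `prover9 -f jacobson3.in`.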

arXiv.org
Readings shared July 12, 2025

The readings shared in Bluesky on 12 July 2025 are A formalization of divided powers in Lean. ~ Antoine Chambert-Loir, María Inés de Frutos-Fernández. #ITP #LeanProver #Math Completeness of the decre

Vestigium
Marginal subsemigroups and commutators in inverse semigroups. ~ Gonçalo Araújo, João Araújo, Michael Kinyon. https://link.springer.com/article/10.1007/s00233-025-10548-9 #ATP #Prover9 #Math
Marginal subsemigroups and commutators in inverse semigroups - Semigroup Forum

Marginal subgroups, introduced by P. Hall, are characteristic subgroups induced by group words. The goal of this paper is to extend the notion to inverse semigroups. Our first main result establishes that these marginal subsemigroups are full inverse subsemigroups. We then examine the special case in which the word is the commutator, showing that the induced marginal inverse subsemigroup coincides with the metacenter, which is a normal inverse subsemigroup. In the process we prove some results about commutators in inverse semigroups and in Clifford semigroups. The paper concludes with several open problems.
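Inverse semigroups form a variety of unary semigroups, which makes them directly amenable to equational provers. As a hedged sketch (not from the paper), here is one standard axiomatization in Prover9 syntax, with the antihomomorphism property of inversion as a sample goal; `i(x)` denotes the inverse of `x`.

```
% Hedged sketch (not from the paper): inverse semigroups as a variety
% of unary semigroups, in Prover9 syntax.  i(x) denotes the inverse of x.
formulas(assumptions).
  (x * y) * z = x * (y * z).                          % associativity
  x * (i(x) * x) = x.                                 % x x' x = x
  i(i(x)) = x.                                        % involution
  (x * i(x)) * (y * i(y)) = (y * i(y)) * (x * i(x)).  % idempotents commute
end_of_list.

formulas(goals).
  i(x * y) = i(y) * i(x).                             % inversion reverses products
end_of_list.
```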

SpringerLink
Readings shared June 10, 2025

The readings shared in Bluesky on 10 June 2025 are Inductive definitions. ~ Lawrence Paulson. #ITP #IsabelleHOL #Math The equational theories project: advancing collaborative mathematical research at

Vestigium
Are LLMs reliable translators of logical reasoning across lexically diversified contexts? ~ Qingchuan Li et al. https://arxiv.org/abs/2506.04575v1 #LLMs #Math #ATP #Prover9
Are LLMs Reliable Translators of Logical Reasoning Across Lexically Diversified Contexts?

Neuro-symbolic approaches combining large language models (LLMs) with solvers excel at logical reasoning problems that need long reasoning chains. In this paradigm, LLMs serve as translators, converting natural language reasoning problems into formal logic formulas, and reliable symbolic solvers then return correct solutions. Despite their success, we find that LLMs, as translators, struggle to handle lexical diversification, a common linguistic phenomenon, indicating that LLMs as logic translators are unreliable in real-world scenarios. Moreover, existing logical reasoning benchmarks lack lexical diversity, failing to challenge LLMs' ability to translate such text and thus obscuring this issue. In this work, we propose SCALe, a benchmark designed to address this significant gap through logic-invariant lexical diversification. By using LLMs to transform original benchmark datasets into lexically diversified but logically equivalent versions, we evaluate LLMs' ability to consistently map diverse expressions to uniform logical symbols on these new datasets. Experiments using SCALe further confirm that current LLMs exhibit deficiencies in this capability. Building on the deficiencies identified through our benchmark, we propose a new method, MenTaL, to address this limitation. It guides LLMs to first construct a table unifying diverse expressions before performing translation. Applying MenTaL through in-context learning and supervised fine-tuning (SFT) significantly improves the performance of LLM translators on lexically diversified text. Our code is available at https://github.com/wufeiwuwoshihua/LexicalDiver.
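The "unifying table" idea behind MenTaL can be illustrated with a toy sketch. The function name and the table below are illustrative only, not the paper's code: before translating to logic, lexically diverse surface forms are first mapped to a single predicate symbol, so logically equivalent sentences normalize to the same form.

```python
# Toy sketch of the "unifying table" idea (illustrative only, not the
# paper's MenTaL implementation): map lexically diverse surface forms
# to a single predicate symbol before translating to formal logic.
SYMBOL_TABLE = {
    "is the parent of": "Parent",
    "is the mother of": "Parent",
    "is the father of": "Parent",
}

def unify(sentence: str) -> str:
    """Replace each known surface form with its uniform predicate symbol."""
    for surface, symbol in SYMBOL_TABLE.items():
        sentence = sentence.replace(surface, symbol)
    return sentence

# Two lexically different sentences now yield the same normalized form.
print(unify("Alice is the mother of Bob"))  # Alice Parent Bob
print(unify("Alice is the father of Bob"))  # Alice Parent Bob
```

In the paper's setting the table is built by the LLM itself; here it is hard-coded only to make the normalization step concrete.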

arXiv.org
Readings shared April 13, 2025

The readings shared in Bluesky on 13 April 2025 are Completeness of decreasing diagrams for the least uncountable cardinality (in Isabelle/HOL). ~ Ievgen Ivanov. #ITP #IsabelleHOL #Math #SetTheory Ef

Vestigium
Razonamiento automático (2011-12)

Readings shared March 24, 2025

The readings shared in Bluesky on 24 March 2025 are Formal verification of machine learning models in Lean. ~ Matéo H. Petel. #ITP #LeanProver #MachineLearning Why Lisp syntax works. ~ Fernando Borre

Vestigium
Razonamiento automático (2008-09)