👉 From COBOL to Kotlin // Marco Graziano
#cobol #kotlin #formalmethods
https://marcoeg.medium.com/from-cobol-to-kotlin-795920b1f371
Building a React App with Formally Verified State
https://midspiral.com/blog/building-a-react-app-with-formally-verified-state/
Designing Predictable LLM-Verifier Systems for Formal Method Guarantee
https://arxiv.org/abs/2512.02080
#HackerNews #LLMVerifier #FormalMethods #AIResearch #Predictability #arXiv

The integration of Formal Verification tools with Large Language Models (LLMs) offers a path to scale software verification beyond manual workflows. However, current methods remain unreliable: without a solid theoretical footing, the refinement process acts as a black box that may oscillate, loop, or diverge. This work bridges this critical gap by developing an LLM-Verifier Convergence Theorem, providing the first formal framework with provable guarantees for termination in multi-stage verification pipelines. We model the interaction not as a generic loop, but as a sequential absorbing Markov Chain comprising four essential engineering stages: CodeGen, Compilation, InvariantSynth, and SMTSolving. We prove that for any non-zero stage success probability ($\delta > 0$), the system reaches the Verified state almost surely. Furthermore, because of the sequential nature of the pipeline, we derive a precise latency bound of $\mathbb{E}[n] \leq 4/\delta$. We stress-tested this prediction in an extensive empirical campaign comprising over 90,000 trials. The results match the theory with striking consistency: every run reached verification, and the empirical convergence factor clustered tightly around $C_f \approx 1.0$, confirming that the $4/\delta$ bound accurately mirrors system behavior rather than serving as a loose buffer. Based on this data, we identify three distinct operating zones -- marginal, practical, and high-performance -- and propose a dynamic calibration strategy to handle parameter drift in real-world environments. Together, these contributions replace heuristic guesswork with a rigorous architectural foundation, enabling predictable resource planning and performance budgeting for safety-critical software.
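A minimal Kotlin sketch (added for illustration; not the paper's code) of one plausible reading of this model: each of the four stages named in the abstract is retried in place with a uniform per-stage success probability delta, so attempts per stage are geometric with mean 1/delta and the total satisfies E[n] <= 4/delta. The retry-in-place policy, the seed, and delta = 0.4 are assumptions of the sketch, not details taken from the paper.

```kotlin
import kotlin.random.Random

// Monte-Carlo sketch of a sequential absorbing chain:
// CodeGen -> Compilation -> InvariantSynth -> SMTSolving -> Verified,
// where a failed stage is retried in place (assumed policy, not the paper's).
fun attemptsToVerified(delta: Double, rng: Random): Int {
    val stages = listOf("CodeGen", "Compilation", "InvariantSynth", "SMTSolving")
    var attempts = 0
    for (stage in stages) {
        do {
            attempts++                          // one LLM/verifier interaction
        } while (rng.nextDouble() >= delta)     // retry this stage until it succeeds
    }
    return attempts                             // chain absorbed in Verified
}

fun main() {
    val rng = Random(42)        // fixed seed for reproducibility (assumption)
    val delta = 0.4             // per-stage success probability (assumption)
    val trials = 100_000
    val mean = (1..trials).map { attemptsToVerified(delta, rng) }.average()
    val bound = 4 / delta
    println("empirical E[n] = $mean, bound 4/delta = $bound, C_f = ${mean / bound}")
}
```

Under this reading, the printed convergence factor C_f should hover near 1.0 when every stage succeeds with probability exactly delta, which is consistent with the abstract's claim that the 4/delta bound tracks observed behavior rather than over-approximating it.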
From Intent to Proof: Dafny Verification for Web Apps