A new piece of research from my group's safe and formally verified numerics project that I am really excited about -- the FLoPS framework, which formalizes the upcoming P3109 standard in Lean. Great work by my PhD students Tung-Che Chang and Sehyeok Park, and our collaborator Jay Lim from the University of California, Riverside.

The upcoming IEEE P3109 standard for low-precision floating-point arithmetic could become the foundation of future machine learning hardware and software. Unlike the fixed formats of IEEE-754, P3109 defines a parametric family of formats characterized by bitwidth, precision, signedness, and domain. This flexibility yields a vast combinatorial space of formats -- some with as little as one bit of precision -- alongside novel features such as stochastic rounding and saturating arithmetic.
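To give a feel for how large that parametric space is, here is a minimal illustrative sketch in Python that enumerates format parameters. The ranges and constraints below are assumptions for illustration only -- the draft standard fixes the exact rules, and the names are not the standard's notation.

```python
# Hypothetical sketch of a parametric format space in the spirit of
# P3109: each format is described by a bitwidth k, a precision p,
# and a signedness flag. The bound 1 <= p <= k is an illustrative
# assumption, not the standard's actual constraint.
def format_space(max_bits=8):
    formats = []
    for k in range(2, max_bits + 1):      # total bitwidth
        for p in range(1, k + 1):         # precision, down to 1 bit
            for signed in (True, False):  # signed or unsigned domain
                formats.append((k, p, signed))
    return formats

formats = format_space(8)
print(len(formats))  # dozens of distinct formats at 8 bits or fewer
```

Even this toy enumeration produces dozens of formats at 8 bits or fewer, which is why mechanized proofs that cover the whole parameter space, rather than one format at a time, are attractive.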

We have formalized the P3109 standard and discovered interesting new properties of foundational algorithms such as Fast2Sum, Sterbenz subtraction, and the floating-point splitting algorithm ExtractScalar in the context of P3109.
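For readers unfamiliar with these algorithms, here is a quick sketch of the classic Fast2Sum, which computes the rounded sum of two floats together with the exact rounding error (this is the textbook algorithm in double precision, not our Lean formalization or its P3109 variant):

```python
from fractions import Fraction

def fast2sum(a, b):
    """Classic Fast2Sum: assuming |a| >= |b|, returns (s, t) such
    that s = fl(a + b) and s + t = a + b exactly (as real numbers)."""
    s = a + b      # rounded sum
    z = s - a      # recover the part of b that made it into s
    t = b - z      # t is the exact rounding error of a + b
    return s, t

s, t = fast2sum(1.0, 1e-17)
# The pair (s, t) represents 1.0 + 1e-17 without any loss:
exact = Fraction(s) + Fraction(t) == Fraction(1.0) + Fraction(1e-17)
```

Part of what makes the P3109 setting interesting is that such classical error-free transformations were analyzed for IEEE-754-style formats; with features like one-bit precisions and saturation, their preconditions and guarantees have to be re-examined, which is exactly where mechanized proof helps.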

During this process, we discovered some errors in the draft standard and reported them to the working group (I am a member of the P3109 working group). They have since been fixed.

See the GitHub repo with our mechanized proofs in Lean: https://github.com/rutgers-apl/flops

See our technical report:
https://arxiv.org/pdf/2602.15965

On a side note, I was persuaded to explore Lean by Ilya Sergey when I visited NUS in August 2024. Thanks, Ilya, for making a compelling case for Lean, and thanks to Umang Mathur and Abhik Roychoudhury for making the visit possible.

#RutgersCS #RAPL #P3109