Floating point representation
Also https://float.exposed by Bartosz Ciechanowski is a great interactive website for poking at floating point numbers
@b0rk Are you making a zine about floats as well?
@rtn the plan is it’s about “how computers represent things in binary”, mostly integers and floats and strings. Probably mostly it’ll be about numbers and not so much about strings though
@b0rk Ciechanowski's mechanical watch and internal combustion engine articles are also amazing... https://ciechanow.ski/mechanical-watch/

Why does 0.1 + 0.2 = 0.30000000000000004? – Julia Evans
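The arithmetic behind that question is easy to poke at from a Python REPL. A quick sketch using only the standard library (the exact digits are specific to IEEE 754 64-bit doubles):

```python
from decimal import Decimal

# Neither 0.1 nor 0.2 is exactly representable in binary, so each is
# stored as the nearest 64-bit double. Decimal shows the stored value:
print(Decimal(0.1))  # 0.1000000000000000055511151231257827...

# Adding the two nearest-double approximations then rounds to a double
# that is NOT the nearest double to 0.3:
print(repr(0.1 + 0.2))   # '0.30000000000000004'
print(0.1 + 0.2 == 0.3)  # False
```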

@sergueil37 @b0rk
> I kind of doubt that anyone had the patience to follow all of that arithmetic

I followed it all. Thanks for the explanations! #thingsivewonderedaboutformyentirelife

@b0rk I think that it would be nice if it was clearer about how a bias is used for representing negative numbers in the exponent.
@mithicspirit yeah that's on the next page (which I haven't posted yet), along with some more information :)
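In the meantime, a small Python sketch of how the bias works for 64-bit doubles: the stored exponent field is the real exponent plus 1023, so negative exponents fit in an unsigned field.

```python
import struct

def float_fields(x):
    """Split a 64-bit double into sign, raw exponent field, fraction bits."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    raw_exp = (bits >> 52) & 0x7FF   # 11-bit stored exponent
    frac = bits & ((1 << 52) - 1)    # 52-bit fraction
    return sign, raw_exp, frac

# 0.5 = 1.0 * 2^-1, so the stored exponent is -1 + 1023 = 1022:
sign, raw_exp, frac = float_fields(0.5)
print(raw_exp, raw_exp - 1023)  # 1022 -1
```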
@b0rk I’m kinda proud to have finished a CS degree without understanding anything about this. Floating points are so weird!
@b0rk That's mighty cool!
Posit: A Potential Replacement for IEEE 754 – SIGARCH
@nathanstocks AFAIK nobody (or almost nobody) uses posits, I only write about things that are very commonly used :)
@b0rk Well, dang. Posits are super awesome, and we're only not using them due to inertia. 😢
@nathanstocks that could be true! it's just not what I do here -- I explain how computers work today, not theorize about potential futures :)
@nathanstocks @b0rk that is a pretty strong exaggeration. We don’t use posits partially because of inertia, but also because they aren’t really any better than IEEE floats, and they’re also worse in some important ways.
@steve @b0rk Oh! I didn't know that. I should read up some more, then. All that I've read to date was glowing recommendations for it.
@nathanstocks @steve there's a subthread here talking about some criticisms (if you scroll down) https://social.jvns.ca/@kelpana@mastodon.ie/109835299704305047
@b0rk @steve Cool, thank you! Such a recent thread, too. I wish I had caught it in the first place. Sorry for the noise!

@nathanstocks @steve @b0rk also worth noting that over the last 8 years there have been 3 different versions:

https://en.wikipedia.org/wiki/Unum_(number_format)

so if you had moved to type II posits (say) you'd have to migrate again. i would stay away unless you're interested in doing your own evaluation and implementation.


@nathanstocks @b0rk Posits are a scam. I've never read papers more disingenuous in my entire life.
@mbr @b0rk I’ve learned a lot from these sub-threads! This paper someone linked to seems to be a pretty thorough look at the pros and cons. Posits are definitely not as advantageous as I was led to believe. Rather, there are a bunch of trade-offs, like many things in life. https://people.eecs.berkeley.edu/~demmel/ma221_Fall20/Dinechin_etal_2019.pdf
@nathanstocks @b0rk Posits don't live up to the hype, and the posit interval arithmetic proposal was downright broken. The primary thing posits are good at is marketing.
@nathanstocks @b0rk interesting format. I hadn't heard about that before.
How does posit distinguish between positive and negative infinity?
The exponent and fraction do not appear to use the same encoding as the regime number. How does one determine the respective lengths of these fields?
What do you do about computations that result in numbers that posit cannot represent, i.e. NaNs in floating point?
@arildsen I seriously didn't mean to hijack this pretty floating point infographic thread. 🙈 It looks like this wikipedia article is a great jumping off point for learning about posits. https://en.wikipedia.org/wiki/Unum_(number_format)

@b0rk @fclc gold star for “significand” instead of “mantissa”.
@steve why do you say that? i just picked whatever term wikipedia was using, I don't have feelings about either of them
@b0rk "Mantissa" is very common, but the IEEE 754 standards (as well as Knuth and Kahan in their influential writings) have always used "significand," because "mantissa" isn't quite correct vis-a-vis its traditional mathematical meaning when talking about logarithms.
@b0rk Peter Cordes has an excellent summary in a comment on Stack Overflow: https://cs.stackexchange.com/a/152281
@b0rk TL;DR: everyone still knows what you mean if you say "mantissa", but "significand" is more correct.

@steve @b0rk
Huh, this is news to me. If I understand correctly, in the traditional usage, the difference is:

x = significand * base ^ exponent

x = base ^ (exponent + mantissa)

Do I have that right?

@inthehands @b0rk yes, up to handwaving about normalization (is significand in [1/base,1) or [1,base)?)

@steve @b0rk Right, and hidden bit etc

I wave my hands a lot, as it turns out
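For the curious, the two meanings above can be sketched in Python (using base 10 for readability; the reconstruction is approximate due to rounding):

```python
import math

x = 300.0

# significand * base ** exponent form:
exponent = math.floor(math.log10(x))
significand = x / 10 ** exponent
print(significand, exponent)        # 3.0 2  (i.e. 3.0 * 10**2)

# base ** (exponent + mantissa) form, the old log-table meaning:
log = math.log10(x)                 # 2.477...
mantissa = log - math.floor(log)    # fractional part of the logarithm
print(10 ** (math.floor(log) + mantissa))  # ~300.0, up to rounding
```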

@steve fun fact: mantissa became more popular because “Sign of the Mantissa” was the name of the supplemental module booklet included in the 1983 revision of the Basic D&D box set
@airspeedswift @steve I boosted this a long time back but I wanted to say thanks for this titbit about D&D! It brings back memories, though my exposure to D&D was really through MUDding (I didn't know anyone who played D&D itself, and I wouldn't have played it with them anyway).
@b0rk this is also what causes "floating-point jitter" in lots of video games: as some games have the origin of the play area centered at a fixed point, the farther you get from this origin, the farther apart adjacent floating point values are. Which, at great distances, becomes visible to the naked eye!
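Those growing gaps are easy to measure. A Python sketch using math.ulp (Python 3.9+), which returns the spacing between a double and its next representable neighbor:

```python
import math

# The gap between adjacent 64-bit doubles doubles each time
# you cross a power of two:
for distance in (1.0, 1e3, 1e6, 1e9, 1e12):
    print(f"{distance:>8.0e}  gap = {math.ulp(distance)}")

# (32-bit floats, which many game engines use for positions, are far
# coarser: around 100 km from the origin, adjacent float32 values
# are already about 8 mm apart.)
```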
@fen yeah! my favourite bug I've heard of like this is the Deep Space Kraken https://wiki.kerbalspaceprogram.com/wiki/Deep_Space_Kraken (because it's such a great name)

@b0rk @fen I wonder if it would have been feasible to implement the physics and coordinates in 64-bit fixed point arithmetic. Shouldn't that even be able to resolve the distance from here to Proxima Centauri with a precision of about 2 millimeters? 😁
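The back-of-the-envelope arithmetic roughly checks out (a sketch; 4.246 light years and 9.461e15 m per light year are approximate figures):

```python
# Treat the full unsigned 64-bit range as spanning the distance
# to Proxima Centauri and see what one fixed-point step would be:
proxima_m = 4.246 * 9.461e15       # ~4.246 light years, in metres
resolution = proxima_m / 2 ** 64   # metres per fixed-point unit
print(resolution)                  # ~0.0022 m, i.e. about 2 mm
```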

@b0rk This is a pretty spiffy illustration! And I knew that floating point didn't store numbers evenly, but I didn't know it stores them in an even distribution within tiers (powers of 2).

And I didn't realize it was so heavily focused on the center. This might explain the differences in GIMP when I switch between 32-bit integer images and 32-bit floating point.

Hmmm... 32-bit float EXR images are nice, but I'm re-evaluating the utility of 16-bit integer PNGs.

@b0rk Now I wonder how the number distribution compares between sRGB 16-bit integer and Rec. 709 32-bit floating point.

Though I suppose this comparison might not be so cleanly represented by an illustration like this.

@b0rk thank you very much for your zines! They’re incredible!
@b0rk I love this. Takes me back to my CS days, but this way actually makes sense!
@b0rk And for extra confusion, some mainframes did the exponent in base 16...
@b0rk Wow, learned something new, that's really interesting. Vaguely realised that as numbers get larger, the accuracy decreases, but never thought about the maths
@b0rk The bestest and easiest to follow explanation
@b0rk I generally think I understand floating point pretty well, but I'd never quite thought of it the way the first diagram lays it out. That makes it so much more obvious.
@b0rk This is a fantastic visualization! The "2^52 numbers here" is way clearer than the way it's often explained (implicit leading 1 before the fractional bits) and makes it easier to see why the ordering property (integer compare works as float compare, mostly) works too.
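That ordering property can be checked directly. A Python sketch (it holds for non-negative finite doubles because the exponent bits sit above the fraction bits in the encoding):

```python
import struct

def bits(x):
    """Raw 64-bit pattern of a double, as an unsigned integer."""
    return struct.unpack('>Q', struct.pack('>d', x))[0]

# For non-negative floats, the bit patterns sort in the same
# order as the values themselves:
values = [0.0, 1e-300, 0.1, 1.0, 1.5, 2.0, 1e300]
assert sorted(values) == sorted(values, key=bits)
print("integer compare matches float compare for these values")
```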
@b0rk that number line illustration is brilliant, thank you!
@b0rk learning how math w/ floating point numbers works was the most difficult part of doing a CS degree for me. Didn’t help that I had those lectures at 8:30am in the dead of winter.
@b0rk what I find fascinating is that in floating point numbers the first digit of the significand is not represented at all, because in all numbers besides 0 the first binary digit is always 1
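A Python sketch of that hidden bit in action, decoding a 64-bit double by hand:

```python
import struct

bits = struct.unpack('>Q', struct.pack('>d', 1.5))[0]
frac = bits & ((1 << 52) - 1)     # the 52 stored fraction bits
raw_exp = (bits >> 52) & 0x7FF    # biased exponent field

# 1.5 is binary 1.1; the leading 1 is implicit, so only the ".1"
# is stored: the top fraction bit is set and nothing else.
print(bin(frac))                  # 0b1 followed by 51 zeros

# Reconstruct the value, putting the implicit 1 back:
value = (1 + frac / 2 ** 52) * 2 ** (raw_exp - 1023)
print(value)                      # 1.5
```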
@b0rk I don't know what I like more: the image or the fact it's alt texted!

@b0rk 🤯

In one toot you have made me completely understand where floating point errors originate!!

Also… does this mean that using floating point for larger numbers is more error prone than for small numbers?

@b0rk what about denormalized floats? Do you have a visualization for them? I’d say that they represent even more numbers around 0.0…
@juandesant i might make one later! Have you ever needed to know about denormalized floats? I'm trying to figure out whether explaining it is worthwhile or whether it's more of an edge case that almost nobody ever runs into.

@b0rk I think it’s something worth knowing about if you really need to manage calculation precision in floating point.

I learned about them for the first time reading about the SANE (Standard Apple Numeric Environment) in the Turbo Pascal for the Macintosh 1.0 manual… circa 1990!

@b0rk @juandesant Subnormals are very important! For example, fl(x - y) = 0 implies x = y (exactly!) if you have subnormals, and if y/2 < x < 2y then fl(x - y) = x - y (no rounding!).

But if you _don't_ have subnormals, you can't reason like this.

The diagram for subnormals is very simple: zoom in to the smallest and second-smallest exponent, and copy the same resolution between [2^emin, 2^{emin + 1}) into [0, 2^{emin}). Without subnormals, there's a huge gap between 0 and 2^{emin}!
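Both of those properties are easy to demo in Python (a sketch; sys.float_info.min is the smallest normal double, 2**-1022, and subnormals fill the gap below it):

```python
import sys

tiny = sys.float_info.min    # smallest normal double, 2**-1022
sub = tiny / 4               # a subnormal: smaller than tiny, not zero
print(sub != 0.0)            # True -- gradual underflow, no sudden gap

# With subnormals, x - y == 0 really does imply x == y:
x, y = tiny * 1.25, tiny     # two close values near the bottom
print(x - y)                 # an exact subnormal result, not 0.0
print(x - y == 0.0, x == y)  # False False
```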

@b0rk great graphic! I especially like how you show precision getting traded for magnitude.
@b0rk really great visual insight about the density of floating point numbers!