today I'm thinking about the "don't use floating point for money" advice I hear all the time. It obviously has a lot of truth to it.

But -- Excel/Google Sheets uses floating point for all of its calculations, people use spreadsheets for money calculations all the time, and it generally seems to work just fine -- the results get rounded for display.
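To make the spreadsheet point concrete, here's a tiny Python sketch of the same pattern: do the math in 64-bit floats, then round for display.

```python
# 64-bit float math, rounded only at display time -- the spreadsheet pattern
total = 0.1 + 0.2
print(total)            # 0.30000000000000004 internally...
print(f"${total:.2f}")  # ...but what a spreadsheet cell would show: $0.30
```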

So I'm trying to figure out if there's a more nuanced guideline than "never use floating point for money".

thinking about trying to define a "safe zone" and a "danger zone" for floating point. maybe something like:

safe zone:
* all integer values (like 1.0, 234.0) behave 100% exactly the way you'd expect, UNLESS (!!!) they're more than 2^53. You can check for equality, it's fine.
* adding up ~100 numbers and rounding the result to 4 decimal places or so is going to work fine, as long as the numbers are roughly the same size
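A quick Python check of both safe-zone claims (2^53 is where the gaps between consecutive 64-bit floats get bigger than 1):

```python
# integers are exact up to 2**53, so equality checks are fine
assert 2.0 ** 53 == 2 ** 53
assert 1.0 + 234.0 == 235.0
# ...but past 2**53, adding 1 does nothing at all
assert 2.0 ** 53 + 1 == 2.0 ** 53

# summing ~100 similar-sized numbers, then rounding, works fine
total = sum([19.37] * 100)
print(total)                 # very close to 1937.0, maybe not exactly
print(round(total, 4))       # 1937.0 after rounding
```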

I think it's interesting to talk about floating point's "safe zone" (things you can do with floating point that are Completely 100% Fine Actually) because I think sometimes folks see that floating point is weird and kind of... overreact and treat it as a Magical Thing that could unexpectedly break at any time.
someone pointed out that this "never use floats for money" advice probably comes from the time of 32-bit floats, which have WAY less precision than 64-bit floats (about 7 digits instead of ~16!) and are VERY VERY bad to use for money: you start losing 1 cent of accuracy around $100,000!!
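You can see the 32-bit problem from Python by round-tripping values through 32-bit storage with `struct` (a sketch, simulating a C `float`):

```python
import struct

def f32(x):
    """Round-trip a Python float through 32-bit storage (like a C `float`)."""
    return struct.unpack("f", struct.pack("f", x))[0]

balance = f32(100000.00)
balance = f32(balance + 0.01)       # deposit one cent
print(balance)                      # 100000.0078125 -- the cent got mangled
print(f32(100000.00 + 0.01) - 100000)  # 0.0078125: ~22% error on one cent

# the same operation in 64-bit floats is fine at this magnitude
print(100000.00 + 0.01)             # 100000.01
```

Near $100,000 the spacing between adjacent 32-bit floats is 2^-7 = 0.0078125 dollars, so individual cents can't be represented any more.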
@b0rk I’m not sure if it applies to money, but some operations are not well defined in the spec, so you get tiny cumulative differences rather than determinism from one platform to the next. This is a problem in gamedev when you want to send minimal data (like input) between players and have each machine compute the full changes to the game world the same way. Differences in float implementations between players’ machines are a big gotcha for desynchronisation bugs. https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/

@b0rk So the safe zone really depends on what you’re trying to do, compiler settings, runtime settings, machine, etc.

I have seen one suggestion for float determinism which keeps all values in a range and rounds to a specific precision after each operation. It’s essentially like 64-bit float but only utilising 58 of the bits you know will be accurate on all platforms (the frequent rounding prevents cumulative errors cascading into more significant bits).

I expect cumulative errors cascading into more significant bits would be a problem for money with 64-bit float too. Although this may be fine if you rounded after every operation and the numbers weren’t too big. At some point there are just more representable numbers and fewer edge cases to worry about with fixed-point math.
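For comparison, the fixed-point version is just integer arithmetic on cents (a sketch; the 8.75% tax rate is a made-up example value):

```python
# fixed-point money: store everything as integer cents, so every value
# and every addition is exact
price = 19_99                          # $19.99 as cents
tax = (price * 875 + 5000) // 10000    # 8.75% tax, rounded half-up to a cent
total = price + tax
print(f"${total / 100:.2f}")           # $21.74
```

The only place rounding happens is the explicit `+ 5000) // 10000`, so you decide exactly when and how cents get rounded instead of it happening implicitly on every operation.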