Today I'm thinking about the "don't use floating point for money" advice I hear all the time. It obviously has a lot of truth to it.

But -- Excel and Google Sheets use floating point for all of their calculations, people use spreadsheets for money calculations all the time, and it generally seems to work just fine -- the results get rounded for display.
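A quick sketch of that "rounded for display" behavior (assuming Python floats, which are the same 64-bit IEEE 754 doubles spreadsheets use internally):

```python
# The raw float result of 0.1 + 0.2 is slightly off, but rounding
# to two decimal places -- as a spreadsheet does for display --
# hides the error completely.
total = 0.1 + 0.2
print(total)            # 0.30000000000000004
print(round(total, 2))  # 0.3
```

So the error is real, but at everyday magnitudes it lives far below the cents digit, and display rounding papers over it.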

So I'm trying to figure out if there's a more nuanced guideline than "never use floating point for money".

@b0rk As I understand this advice, proper bookkeeping abhors rounding - and truncating!

Traditional money isn't infinitely divisible: you can't pay someone a quarter of a cent - it's not allowed (though in the U.S. a half-cent coin did once exist).

But worse, a float will just start losing money if the amount gets too large - it has limited precision: 1e16 + 1 == 1e16.
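To make that concrete, here's a minimal demonstration with Python floats (64-bit doubles): the significand is 53 bits, so above 2**53 (about 9.0e15) consecutive whole numbers can no longer all be represented, and the addition is silently rounded away.

```python
# 2**53 is the first point where whole numbers stop being exact:
assert 2.0**53 + 1 == 2.0**53   # the +1 can't be represented

# The same thing at a round decimal number past that threshold:
assert 1e16 + 1 == 1e16         # the extra unit just vanishes
```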

So a data type that will at times round or truncate behind our backs, without warning, is just incompatible with sound bookkeeping?

@b0rk Also for context, I think this advice dates from when you ran a real risk of "float" meaning a relatively small, single-precision type.

A single-precision float (32 bits) only has 24 bits of precision, which is small enough that you will run into trouble with real-life amounts pretty quickly.
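A sketch of how quickly that bites, simulating single precision by round-tripping a Python float through the `struct` module (the `to_f32` helper is just for illustration):

```python
import struct

def to_f32(x: float) -> float:
    """Round-trip a Python float through IEEE 754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# 24 bits of precision means whole numbers above 2**24 = 16,777,216
# can't all be represented. If you track cents in a float32, that's
# only $167,772.16 before adding one cent silently does nothing:
balance = to_f32(16_777_216.0)           # cents; exactly representable
assert to_f32(balance + 1.0) == balance  # one cent, silently lost
```

With a 64-bit double the same cliff exists, it's just pushed out to 2**53 cents (tens of trillions of dollars), which is why spreadsheets mostly get away with it.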