the gaps between floats
“there are only 2^64 floating point numbers, of course they’re weird, it’s a miracle that it even works at all” was such a big a-ha moment the first time someone pointed it out to me
@b0rk the pic in your explainer of the number line was super helpful. thanks.
@b0rk yeah same -- it's easy to get frustrated with all the weirdness but when you consider the range it supports versus 64-bit ints it is really amazing ✨
@b0rk 2^64 is such a big number that it seems close to infinity, but it's actually quite far
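You can see those gaps directly; a quick sketch in Python (math.ulp needs Python 3.9+):

import math

# the gap between a double and its next-door neighbour grows with magnitude
for x in [1.0, 1e6, 1e12, 1e16]:
    print(x, math.ulp(x))
# 1.0    2.220446049250313e-16
# 1e6    1.1641532182693481e-10
# 1e12   0.0001220703125
# 1e16   2.0

print(1e16 + 1 == 1e16)  # True: at 1e16 the gap is 2, so adding 1 just rounds away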

@SeaRyanC I find it kind of interesting that 2^64 nanoseconds is about 600 years

which is a long time, but like Go's `time.Time` is defined to start at Jan 1 1885 https://cs.opensource.google/go/go/+/refs/tags/go1.20.3:src/time/time.go;l=141, so it'll stop working around the year 2500

2500 is very far in the future, but like not “death of the solar system" far in the future
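A quick sanity check on that arithmetic, in Python:

# 2^64 nanoseconds, converted to years
print(2**64 / (1e9 * 365.25 * 24 * 3600))  # ≈ 584.5
print(1885 + 584)                           # 2469, i.e. "around 2500"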

@b0rk @SeaRyanC it beats early Unix variants, which run out of time in 2038
@b0rk @SeaRyanC all of us: "The execs will definitely have stopped arguing about the cost of upgrading the system by 2500, right?"

@b0rk @SeaRyanC a rough estimate i like is 2^25 seconds in a year

(it’s like 6% too many seconds, but handy for guesstimating if a data type is big enough for my requirements)
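That checks out; a sketch:

year = 365.25 * 24 * 3600    # ≈ 31,557,600 seconds
print(2**25)                 # 33,554,432
print(2**25 / year - 1)      # ≈ 0.063, i.e. about 6% too many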

@fanf @b0rk @SeaRyanC it’s less useful to know, but a year is also very nearly pi*10^7 seconds (within half a percent)!
@fanf @b0rk @SeaRyanC (this is the only thing I remember from the advanced lab class I had to take for my physics degree)
@chrisvermilion @fanf @b0rk @SeaRyanC Similar physics approximation (only useful in certain countries): 1 m/s \approx \sqrt{5} mi/h (off by about 0.04%).
@chrisvermilion @fanf @b0rk @SeaRyanC (Helpful if you want to talk about acceleration of a modern train, always quoted in m/s/s, with an American Olde Tyme Railroader who only understands mi/h/s, which is the context in which I learned this: 3600/1609.344 is ~2.236936, which is instantly recognizable as being about root 5 (~2.236068).)
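Both approximations check out; a sketch in Python:

import math

year = 365.25 * 24 * 3600              # seconds in a year
print(math.pi * 1e7 / year - 1)        # ≈ -0.0045: within half a percent

mph_per_mps = 3600 / 1609.344          # 1 m/s expressed in mi/h
print(mph_per_mps / math.sqrt(5) - 1)  # ≈ 0.00039: off by about 0.04%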
@fanf @b0rk @SeaRyanC The number of living people on the planet doesn't quite fit into a uint32.
@b0rk
Quite often it doesn't work very well; finding the roots of high-order polynomials, for example.
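Wilkinson's polynomial is the classic demonstration of this; a sketch with NumPy (assuming it's installed):

import numpy as np

# (x-1)(x-2)...(x-20): the roots are exactly 1..20, but the coefficients
# reach ~2.4e18, beyond the 2^53 integers a double can hold exactly
coeffs = np.poly(np.arange(1, 21))
print(sorted(np.roots(coeffs).real)[-5:])
# the largest roots come back visibly off from 16..20, because they are
# violently sensitive to those tiny coefficient rounding errors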

@b0rk One of my favourite descriptions is that "they resemble badly behaved integers" 😄

https://97-things-every-x-should-know.gitbooks.io/97-things-every-programmer-should-know/content/en/thing_33/

Floating-point Numbers Aren't Real · 97 Things Every Programmer Should Know

@b0rk 2^32 actually, if you stick to plain ol' single-float 😅
@b0rk
I think we should call all the rest sunken point numbers. 😉
(Or maybe it should be fixed point and broken point. 😉 )
@b0rk Have you seen this video: https://www.youtube.com/watch?v=nYDmBdUalgo
It includes a nice explanation of floating point numbers.
How to Crash SM64 Using a Pendulum (Commentated) · YouTube
@b0rk
This is why I always use strings to avoid weird memory issues.
@unikitty @b0rk Strings?
@moffintosh @b0rk
I think this was a Javascript joke, but maybe it was a Javascript serious. It was too long ago and I have forgotten.
@b0rk this is literally the first time I've understood the problem, thanks so much!
@b0rk and then you learn there aren't even *that* many, because the exponent field uses two of its values (all 0s or all 1s) for "special" numbers, so out of the 2^53 possible values those exponents could represent, only a small handful are used.
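You can poke at those bit patterns directly; a small Python sketch:

import struct

def bits(x):
    # reinterpret the double's 8 bytes as a 64-bit unsigned integer
    (u,) = struct.unpack("<Q", struct.pack("<d", x))
    s = f"{u:064b}"
    return f"{s[0]} {s[1:12]} {s[12:]}"  # sign | 11-bit exponent | 52-bit fraction

print(bits(1.0))           # 0 01111111111 0000...0
print(bits(float("inf")))  # exponent all 1s, fraction zero
print(bits(float("nan")))  # exponent all 1s, fraction nonzero
print(bits(5e-324))        # exponent all 0s: the smallest subnormal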

@b0rk There are a countably infinite number of floating point numbers (ℵ0), and an infinitely larger number of irrational numbers.

You can fire up any Common #Lisp implementation you like right now and do this (factorial isn't in the standard, so define it first):

(defun factorial (n)
  (if (zerop n) 1 (* n (factorial (1- n)))))

(* (factorial 100) pi)
2.931929528260332*10^158

or this:

(/ (factorial 25) (expt 2 70))
3698160658676859375/281474976710656

Do not allow yourself to be limited by obsolete technology.

@simon_brooke wrote: "There are a countably infinite number of floating point numbers (ℵ0)"

Sure, if your computer has a countably infinite amount of memory. 🙂

@dfs_comedy
Ish. You're never going to be able to represent all of them. But the idea that the highest precision you can achieve is limited to 64 bits (eight bytes) on computers which have eight gigabytes or more of store is bizarre, archaic thinking - industrial archaeology, not computer science.
@simon_brooke Well, yeah. There are arbitrary-precision math packages out there. But for many purposes, they are impractically slow, so we make do with small floating-point ranges.

@dfs_comedy For many purposes, small floating point ranges are impractically imprecise.

As to slow, on my six year old desktop PC:

* (time (/ (factorial 1000) 22/7))
Evaluation took:
0.000 seconds of real time
0.000167 seconds of total run time (0.000152 user, 0.000015 system)
100.00% CPU
599,328 processor cycles
486,592 bytes consed

That's a rational number approximately equal to 1.2803E2567 - and it's *exact*, with perfect precision.

@simon_brooke Those benchmarks are meaningless. A lot of workloads make millions or billions of floating-point calculations (3D rendering, scientific computations) and so using arbitrary-precision arithmetic is impractically slow for those cases, and also typically unnecessary. There are plenty of well-known techniques to mitigate the limited range and precision of floating-point arithmetic in practical applications.

Can you name a real-world application where such high precision is required?
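Kahan (compensated) summation is one of those well-known techniques; a minimal sketch in Python:

def kahan_sum(xs):
    total = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for x in xs:
        y = x - c
        t = total + y
        c = (t - total) - y  # recover what the addition just rounded away
        total = t
    return total

xs = [1.0] + [1e-16] * 10**6  # each 1e-16 is under half an ulp of 1.0
naive = 0.0
for x in xs:
    naive += x
print(naive)            # 1.0: the million small terms all round away
print(kahan_sum(xs))    # ≈ 1.0000000001, the true sum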

@b0rk The most fun thing I know about this space (along with all the other weird minority number systems like interval systems, posits and residue systems) is that there's a finite rational-subset residue-like system with a different metric (finite-segment p-adics, or "Hensel codes") that has no rounding errors on its main arithmetic operations (as with all of them it has other problems: conversion back to standard fraction form and ordering under the standard metric are hard, but .. still .. so compelling): https://books.google.ca/books?id=HiLSBwAAQBAJ&lpg=PA61&pg=PA63#v=onepage&q&f=false

I think some quantity of it is implemented in https://github.com/davidwilliam/hensel_code

(All this stuff makes FP experts' blood boil because they're busy fixing actual problems in actual arithmetic people actually use rather than chasing fantasy arithmetic that happens to be wildly inefficient or undefined in important use cases. Ah well.)

Methods and Applications of Error-Free Computation · Google Books
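The no-rounding-error claim is easy to taste in miniature; here's a toy sketch of my own (not the book's algorithms) in Python, doing Hensel-style arithmetic in Z/p^r with extended-Euclid "rational reconstruction" standing in for the hard decode step:

from math import isqrt

def encode(a, b, m):
    # map the fraction a/b (b coprime to p) to a * b^-1 mod m, where m = p^r;
    # +, - and * are then plain modular arithmetic: no rounding at all
    return a * pow(b, -1, m) % m

def decode(c, m):
    # rational reconstruction: find a/b with a ≡ b*c (mod m)
    # and |a|, |b| <= sqrt(m/2), via the extended Euclidean algorithm
    bound = isqrt(m // 2)
    r0, r1, t0, t1 = m, c, 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    assert abs(t1) <= bound, "no unique reconstruction"
    return (-r1, -t1) if t1 < 0 else (r1, t1)

m = 5 ** 20
x = encode(1, 3, m)             # exactly 1/3
y = encode(1, 7, m)             # exactly 1/7
print(decode((x + y) % m, m))   # (10, 21): 1/3 + 1/7 = 10/21, exact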
@graydon @b0rk but does their blood boil at 100°C or 100.00000000000022204°C
@b0rk and also the real number line that they are trying to emulate is not only infinite, it is infinitely bigger than the natural numbers.
@b0rk My mind is duly blown.