@SeaRyanC I find it kind of interesting that 2^64 nanoseconds is about 600 years
which is a long time, but like Go's `time.Time` is defined to start at Jan 1 1885 https://cs.opensource.google/go/go/+/refs/tags/go1.20.3:src/time/time.go;l=141, so it'll stop working around the year 2500
2500 is very far in the future, but like not “death of the solar system” far in the future
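The back-of-the-envelope arithmetic above checks out; a quick sketch (plain Python, just checking the numbers, not Go):

```python
# How long is 2**64 nanoseconds, and when does a clock starting
# at Go's 1885 wall-clock epoch run out?
NS_PER_YEAR = 365.2425 * 24 * 60 * 60 * 1e9  # Gregorian year in ns

years = 2**64 / NS_PER_YEAR
print(round(years))         # ~585 years, i.e. "about 600"
print(1885 + round(years))  # overflow lands around the year 2470
```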
@b0rk One of my favourite descriptions is that "they resemble badly behaved integers" 😄
@b0rk There are a countably infinite number of floating point numbers (ℵ0), and an infinitely larger number of irrational numbers.
You can fire up any Common #Lisp implementation you like right now and do this (`factorial` isn't standard CL, so define it first, e.g. `(defun factorial (n) (if (zerop n) 1 (* n (factorial (1- n)))))`):
(* (factorial 100) pi)
2.931929528260332d158
or this:
(/ (factorial 25) (expt 2 70))
3698160658676859375/281474976710656
Do not allow yourself to be limited by obsolete technology.
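The same exact-rational computation can be mirrored in any language with bignums; a minimal sketch using Python's stdlib `fractions.Fraction` (my illustration, not from the thread):

```python
from fractions import Fraction
from math import factorial

# Mirror of the Lisp (/ (factorial 25) (expt 2 70)) above:
# Fraction keeps the result as an exact, auto-reduced rational.
exact = Fraction(factorial(25), 2**70)
print(exact)  # 3698160658676859375/281474976710656
```

The denominator reduces to 2^48 because 25! contains exactly 2^22 as a factor of two.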
@simon_brooke wrote: "There are a countably infinite number of floating point numbers (ℵ0)"
Sure, if your computer has a countably infinite amount of memory. 🙂
@dfs_comedy For many purposes, floating point's limited range and precision are impractical.
As to slow, on my six year old desktop PC:
* (time (/ (factorial 1000) 22/7))
Evaluation took:
0.000 seconds of real time
0.000167 seconds of total run time (0.000152 user, 0.000015 system)
100.00% CPU
599,328 processor cycles
486,592 bytes consed
That's a rational number approximately equal to 1.2803×10^2567 - and it's *exact*, with perfect precision.
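The exactness claim is easy to check in any bignum-capable language; a sketch in Python (my example, assuming the same 1000!/(22/7) computation):

```python
from fractions import Fraction
from math import factorial

n = factorial(1000)           # a 2568-digit integer
q = Fraction(n) / Fraction(22, 7)

# Exact: multiplying back recovers 1000! with no rounding at all.
assert q * Fraction(22, 7) == n

# By contrast, a 64-bit float can't even represent the operand:
try:
    float(n)
except OverflowError:
    print("1000! overflows a 64-bit float")
```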
@simon_brooke Those benchmarks are meaningless. A lot of workloads make millions or billions of floating-point calculations (3D rendering, scientific computations) and so using arbitrary-precision arithmetic is impractically slow for those cases, and also typically unnecessary. There are plenty of well-known techniques to mitigate the limited range and precision of floating-point arithmetic in practical applications.
Can you name a real-world application where such high precision is required?
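One of those well-known mitigation techniques is compensated summation; here's a sketch of the Kahan–Babuška (Neumaier) variant as a hypothetical helper (Python's stdlib `math.fsum` does something similar):

```python
def neumaier_sum(xs):
    """Compensated summation: track the low-order bits that naive
    floating-point addition discards."""
    s = 0.0  # running sum
    c = 0.0  # running compensation for lost low-order bits
    for x in xs:
        t = s + x
        if abs(s) >= abs(x):
            c += (s - t) + x  # low bits of x were lost in s + x
        else:
            c += (x - t) + s  # low bits of s were lost in s + x
        s = t
    return s + c

data = [1.0, 1e100, 1.0, -1e100]
print(sum(data))           # 0.0  -- naive: both 1.0s vanish
print(neumaier_sum(data))  # 2.0  -- compensated: exact here
```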
@b0rk The most fun thing I know about this space (along with all the other weird minority number systems like interval systems, posits and residue systems) is that there's a finite rational-subset residue-like system with a different metric (finite-segment p-adics, or "Hensel codes") that has no rounding errors on its main arithmetic operations. Like all of them it has other problems - converting back to standard fraction form and ordering under the standard metric are hard - but .. still .. so compelling: https://books.google.ca/books?id=HiLSBwAAQBAJ&lpg=PA61&pg=PA63#v=onepage&q&f=false
I think some quantity of it is implemented in https://github.com/davidwilliam/hensel_code
(All this stuff makes FP experts' blood boil because they're busy fixing actual problems in actual arithmetic people actually use rather than chasing fantasy arithmetic that happens to be wildly inefficient or undefined in important use cases. Ah well.)
This book is written as an introduction to the theory of error-free computation. In addition, we include several chapters that illustrate how error-free computation can be applied in practice. The book is intended for seniors and first-year graduate students in fields of study involving scientific computation using digital computers, and for researchers (in those same fields) who wish to obtain an introduction to the subject.

We are motivated by the fact that there are large classes of ill-conditioned problems, and there are numerically unstable algorithms, and in either or both of these situations we cannot tolerate rounding errors during the numerical computations involved in obtaining solutions to the problems. Thus, it is important to study finite number systems for digital computers which have the property that computation can be performed free of rounding errors.

In Chapter I we discuss single-modulus and multiple-modulus residue number systems and arithmetic in these systems, where the operands may be either integers or rational numbers. In Chapter II we discuss finite-segment p-adic number systems and their relationship to the p-adic numbers of Hensel [1908]. Each rational number in a certain finite set is assigned a unique Hensel code, and arithmetic operations using Hensel codes as operands are mathematically equivalent to those same arithmetic operations using the corresponding rational numbers as operands. Finite-segment p-adic arithmetic shares with residue arithmetic the property that it is free of rounding errors.
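The rounding-free property described above can be illustrated with a toy sketch (my example: p=5, r=4, so arithmetic mod 5^4; real Hensel-code systems are considerably more involved):

```python
# Toy finite-segment p-adic ("Hensel code") sketch: map a rational a/b
# (with b coprime to p) to a * b^-1 mod p^r, then use plain modular
# arithmetic -- no rounding anywhere.
P, R = 5, 4
M = P ** R  # 625

def hensel(a, b):
    # pow(b, -1, M) is the modular inverse (Python 3.8+)
    return a * pow(b, -1, M) % M

# 1/3 + 1/6 = 1/2, exactly, inside the code system:
assert (hensel(1, 3) + hensel(1, 6)) % M == hensel(1, 2)
print(hensel(1, 3), hensel(1, 6), hensel(1, 2))  # 417 521 313
```

Decoding a code back to a standard fraction is the hard part the thread mentions; the arithmetic itself is error-free.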