New blog post! I really hope I'll get a decent amount of people mad with this one 😈😈😈
It's OK to compare floating-points for equality:
https://lisyarus.github.io/blog/posts/its-ok-to-compare-floating-points-for-equality.html
@lisyarus OK, let's get started 8-D
1. when an epsilon check *is* made, the epsilon should never be arbitrary, but at the very least be based on machine epsilon and the magnitude of the numbers involved (e.g. something like fabs(x - y) < (x+y)*FLT_EPSILON/2 rather than fabs(x-y) < 1.e-4), unless additional information is known about the process by which the numbers are obtained.
2. I like the recommendation to use hypot for the vector length, but it might be worth mentioning that for maximum accuracy it's preferable to sort the components, or at the very least identify which one is largest. Computationally, this spares one (expensive) division and one multiplication at the expense of two swaps, but adding the two smaller squared ratios to 1 is more consistently accurate than the three squared terms added in arbitrary order.
2bis. also I don't think anyone ever brought that up as an example in which to use an arbitrary epsilon, but that's a different matter 8-D
3. the Gauss–Jordan elimination argument is *really* flaky. GJ is just an extremely poor algorithm in general, and should not be used at all, but replacing the epsilon check with a 0 check doesn't really help in its application. “Very imprecise” can be quite catastrophic depending on applications.
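(For reference, the usual remedy isn't an epsilon on the pivot at all, but partial pivoting: pick the largest-magnitude candidate pivot, and only bail on an exact zero. A rough Gaussian-elimination sketch, solve being my own hypothetical helper:)

```c
#include <math.h>
#include <stddef.h>

/* Gaussian elimination with partial pivoting on an n x n system.
 * Each step picks the largest-magnitude entry in the current column
 * as the pivot; only an exact zero (a truly singular column) aborts,
 * no arbitrary epsilon involved.  a is row-major n*n, b is the right
 * side; the solution overwrites b.  Returns 0 on success, -1 if the
 * matrix is singular. */
int solve(double *a, double *b, size_t n)
{
    for (size_t k = 0; k < n; ++k) {
        /* partial pivoting: find the row with the largest |a[i][k]| */
        size_t p = k;
        for (size_t i = k + 1; i < n; ++i)
            if (fabs(a[i * n + k]) > fabs(a[p * n + k]))
                p = i;
        if (a[p * n + k] == 0.0)   /* exact-zero check, no epsilon */
            return -1;
        if (p != k) {              /* swap rows p and k */
            for (size_t j = 0; j < n; ++j) {
                double t = a[k * n + j];
                a[k * n + j] = a[p * n + j];
                a[p * n + j] = t;
            }
            double t = b[k]; b[k] = b[p]; b[p] = t;
        }
        /* eliminate below the pivot */
        for (size_t i = k + 1; i < n; ++i) {
            double f = a[i * n + k] / a[k * n + k];
            for (size_t j = k; j < n; ++j)
                a[i * n + j] -= f * a[k * n + j];
            b[i] -= f * b[k];
        }
    }
    /* back substitution */
    for (size_t k = n; k-- > 0; ) {
        for (size_t j = k + 1; j < n; ++j)
            b[k] -= a[k * n + j] * b[j];
        b[k] /= a[k * n + k];
    }
    return 0;
}
```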
4. the user input case really brings out the fact that the main pain point with epsilons isn't even the epsilons themselves, but the fact that they are chosen *arbitrarily*; however, this is a very different matter from “don't use them when they aren't needed”. (yes, this connects to point 1. above)
THE END.