Why does 0.1 + 0.2 = 0.30000000000000004?

Julia Evans
@b0rk I remember this from the days of 16-bit ints and the performance hit of floating-point calculations: C code with short int, int, and long int, and migrations from 32-bit to 64-bit default ints that crashed programs.
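The result in the title comes from the fact that 0.1 and 0.2 have no exact representation in binary floating point (IEEE 754 doubles), so the stored values are slightly off and the error shows up in the sum. A minimal sketch in Python:

```python
from decimal import Decimal

# 0.1 + 0.2 does not round to exactly 0.3 in IEEE 754 double precision.
print(0.1 + 0.2)           # prints 0.30000000000000004
print((0.1 + 0.2) == 0.3)  # prints False

# Decimal(0.1) reveals the exact binary value actually stored for 0.1,
# which is slightly larger than one tenth.
print(Decimal(0.1))
```

The same behavior appears in any language that uses IEEE 754 doubles (C, Java, JavaScript, and so on); it is a property of the number format, not of Python.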