If you multiply a fixed-point value with 2 bits of fractional precision by one with 4 bits, you wind up with a result that has 6 bits of fraction. If you want a result with 3 bits of fraction, you shift right by 6 - 3 = 3 bits.
This is easily understood, but only if the underlying concept is properly explained from the start.
I like your funny words magic fox
I understand nothing of precision. Almost everything I've ever done has not needed to strictly worry about types, apart from Java, but that felt different.
I have a book on C programming on my floor; I should probably get to it at some point...
@LottieVixen @eniko An example in decimal: If you represent 1.5 as 15 (with one implied fraction digit), when you multiply you get 15 × 15 = 225, or 2.25 (with two fraction digits). If you want to get back to one fraction digit you have to shift 225 right by a digit (and decide how to round the 5).
It's not as common these days because most CPUs have floating-point hardware, and also because languages like C make it a little awkward (I assume 1980s-era fixed point libraries used inline assembly to access the 32 × 32 → 64-bit multiply).
@LottieVixen @eniko I guess the other part is: If you're going to have to shift anyway, it can be almost* free to choose a different number of fraction bits for your inputs/outputs: Instead of the usual s15.16 (one sign bit, 15 integer bits, 16 fraction bits; other notations exist), you could use s1.30 or even s1.14 for rotation matrices/sine tables/etc. You can mix multiple 16-bit audio channels (s0.15 essentially) with volume controls into an s7.24 accumulator. That sort of thing.
* The big caveat is that x86/M68K give easy access to just the low 16 bits of a register, which means you can do s15.16 without actually masking/shifting, making it faster.