if you multiply a fixed point value with 2 bits of fractional precision by one with 4 bits, you wind up with a result that has 6 bits of fraction. if you want a result with 3 bits of fraction, you shift right by 6-3=3 bits
this is easily understood, but only if the underlying concept is properly explained from the start
I like your funny words magic fox
I understand nothing of precision. Almost everything I've ever done hasn't needed to strictly worry about types, apart from Java, but that felt different
I have a book on C programming on my floor I should probably get to at some point...
@LottieVixen @eniko I know this is a rabbit hole and completely inefficient, but C may not be as helpful here as some archaic assembly coding. http://bitsavers.informatik.uni-stuttgart.de/pdf/ibm/7090/books/Saxon_Programming_the_IBM_7090_1963.pdf Saxon's tutorial for the IBM 7090 is pretty clear, with a strong focus on both fixed and floating point math at the machine code level. It's tedious bit tracking, but it's methodical, practical, and straightforward, and it gets the ideas across without having to look past the thin abstractions C puts in the way.
Bonus: If you actually want to run the examples, simh has an IBM 7090/7094 emulator plus all the OS & tooling you might need to program like it's 1964. The Saxon book does not assume you have your own 7090 and is perfectly usable on its own. It's also a good reminder of why we use high-level languages, especially where math is concerned. Floating-point at the machine code level is really tedious.
@LottieVixen @eniko An example in decimal: If you represent 1.5 as 15 (with one implied fraction digit), when you multiply you get 15 × 15 = 225, or 2.25 (with two fraction digits). If you want to get back to one fraction digit you have to shift 225 right by a digit (and decide how to round the 5).
It's not as common these days because most CPUs have floating-point hardware, and also because languages like C make it a little awkward (I assume 1980s-era fixed point libraries used inline assembly to access the 32 × 32 → 64-bit multiply).
@LottieVixen @eniko I guess the other part is: If you're going to have to shift anyway, it can be almost* free to choose different number of fraction bits for your inputs/outputs: Instead of the usual s15.16 (one sign bit, 15 integer bits, 16 fraction bits; other notation exists), you could use s1.30 or even s1.14 for rotation matrixes/sine tables/etc. You can mix multiple 16-bit audio channels (s0.15 essentially) with volume controls into a s7.24 accumulator. That sort of thing.
* The big caveat is that x86/M68K give easy access to just the low 16 bits of a register, which means you can do s15.16 without actually masking/shifting, making it faster.
@gabrielesvelto it's also very important because with fixed point you constantly have to figure out what precision you want for a particular problem domain so that you know your limits and when overflow could occur
which means different types with different levels of precision are best, but nothing ever explains it in a way where a newcomer would feel comfortable mixing different fixed point types, leading to worse results
@eniko oh look! Some of that stuff is still available! I haven't found the 68000 ones but those are probably in .lha files on Aminet. In the meantime check out some classic tutorials on software rendering like fatmap.txt, fatmap2.zip or texture.txt.
https://mikro.naprvyraz.sk/docs/Coding/1/
https://www.gamers.org/dEngine/rsc/pcgpe-1.0/
This stuff is an incredible resource for understanding how things were done back in the day, what the constraints were, and the reasoning around them.
@Rk yes, fixed point requires a lot of thinking about your exact problem domain and what the limits are before overflow occurs
which is why it's so vexing that it's usually explained so poorly: you can get way better results by having many different fixed point types with different bit layouts for different tasks, precisely so you can avoid overflow
but mixing different types is basically never explained