fixed point is taught wrong. instead of saying "to multiply two fixed point values you have to multiply them then shift them back into position" what it really should be is "when you multiply two fixed point values the result has a fractional precision in bits equal to the sum of the fractional precisions of the two fixed point types being multiplied"
this not only makes it easy to see *why* you have to shift (the fractional precision increased, so if you want to keep a lower one you have to shift) but it also makes it clear how multiplication between two fixed point types with different fractional precision works

if you multiply a fixed point value with 2 bits of fractional precision and one with 4 bits you wind up with a result that has 6 bits of fraction. if you want a result with 3 bits of fraction you shift right by 6-3=3 bits

this is easily understood, but only if the underlying concept is properly explained from the start

this also makes it easier to see that if you're transforming a fixed point type into another fixed point type via multiplication (like position * position = edgeweight for a rasterizer) you may want to not shift at all if you want the resulting fixed point type to represent the transformation precisely without any rounding

@eniko

I like your funny words magic fox

I understand nothing of precision; almost everything I've ever done has not needed to strictly worry about types, apart from Java, but that felt different

I have a book on C programming on my floor I should probably get to at some point...

@eniko but more seriously.. uh this sounds cool but also legitimately.. out of my knowledge zone by quite a bit, right over my ears n tail.

@LottieVixen @eniko I know this is a rabbit hole and completely inefficient, but C may not be as helpful here as some archaic assembly coding. http://bitsavers.informatik.uni-stuttgart.de/pdf/ibm/7090/books/Saxon_Programming_the_IBM_7090_1963.pdf Saxon's tutorial for the IBM 7090 is pretty clear with a strong focus on both fixed and floating point math at the machine code level. It's tedious bit tracking but it's methodical, practical, and straightforward, and gets the ideas across without having to look past the thin abstractions C puts in the way.

Bonus: If you actually want to run the examples, simh has an IBM 7090/7094 emulator plus all the OS & tooling you might need to program like it's 1964. The Saxon book does not assume you have your own 7090 and is perfectly usable on its own. It's also a good reminder of why we use high-level languages, especially where math is concerned. Floating-point at the machine code level is really tedious.

@LottieVixen @eniko An example in decimal: If you represent 1.5 as 15 (with one implied fraction digit), when you multiply you get 15 × 15 = 225, or 2.25 (with two fraction digits). If you want to get back to one fraction digit you have to shift 225 right by a digit (and decide how to round the 5).

It's not as common these days because most CPUs have floating-point hardware, and also because languages like C make it a little awkward (I assume 1980s-era fixed point libraries used inline assembly to access the 32 × 32 → 64-bit multiply).

@LottieVixen @eniko I guess the other part is: If you're going to have to shift anyway, it can be almost* free to choose different numbers of fraction bits for your inputs/outputs: Instead of the usual s15.16 (one sign bit, 15 integer bits, 16 fraction bits; other notation exists), you could use s1.30 or even s1.14 for rotation matrices/sine tables/etc. You can mix multiple 16-bit audio channels (s0.15 essentially) with volume controls into a s7.24 accumulator. That sort of thing.

* The big caveat is that x86/M68K give easy access to just the low 16 bits of a register, which means you can do s15.16 without actually masking/shifting, making it faster.

@eniko I think often I've seen fixed point multiplication described as X:Y * X:Y = X:Y, ie the same precision. Increasing the precision of the result means actually changing the type (which is ok like you say) but it is another operation. Maybe that is where the confusion comes from in the places that describe it. I like your view of it of course and it builds intuition.
@eniko Another interesting topic is that if you multiply 32 bits by 32 bits you get a 64-bit result. But older machines did not have a 64-bit result, so you sometimes had to shift before doing the multiplication. But you don't have to shift the same amount on both factors.
@eniko yeah that just makes another kind of fixed point in the end
@eniko also a good idea to mention percentages, because they're fixed point and people use them all the time
@eniko I remember reading about fixed-point math in asm tutorials that date back to the '90s and they made this clear with gorgeous ASCII drawings. I wonder what kind of resources show up today in a search and why they fail to explain this part
@gabrielesvelto i've read about fixed point math many times on the modern internet and i don't think i've ever seen this pointed out. i had to come to it on my own and it's not super intuitive
@eniko @gabrielesvelto I seem to remember the "Graphics Gems" series of books containing a lot of great information on fixed point maths, including various useful algorithms. Unfortunately I'm pretty sure those books are long out of print (exorbitant prices anyway) - I read them at our university library over 20 years ago.
@pmdj @eniko I'm sure there must be copies on the Internet Archive
@eniko and ofc this is also important for different types of fixed-point operations, like how you can do repeated divisions by multiplying by the reciprocal of the divisor where the reciprocal is just (1 << $precision) / $divisor

@gabrielesvelto it's also very important because with fixed point you constantly have to figure out what precision you want for a particular problem domain so that you know your limits and when overflow could occur

which means different types with different levels of precision are best but nothing ever explains it in a way where a newcomer would feel comfortable mixing different fixed point types, leading to worse results

@eniko indeed. I can see how that can be a problem for something like rasterization, where you need a certain amount of subpixel precision but not necessarily as much as what you need for dealing with geometry... but you're converting between the two if your entire pipeline is fixed-point so the problem does come up in relatively common situations

@eniko oh look! Some of that stuff is still available! I haven't found the 68000 ones but those are probably in .lha files on Aminet. In the meantime check out some classic tutorials on software rendering like fatmap.txt, fatmap2.zip or texture.txt.

https://mikro.naprvyraz.sk/docs/Coding/1/
https://www.gamers.org/dEngine/rsc/pcgpe-1.0/

This stuff is an incredible resource to understand how things were done back in the day, what the constraints were, and the reasoning around them.


@eniko hah, look at this stuff! This is a gold mine: https://flipcode.com/archives/articles.shtml

@eniko I'm bookmarking all these things and making sure they're on the Internet Archive too: https://www.modeemi.fi/drdoom/3dica/

@eniko i want to write a small library/toy language of sorts to help me deal with fixed point math, mainly for embedded projects. the main problem i found is that you also need to account for the non-fractional bits to avoid overflowing the integer type, and sometimes adjusting things has a ripple effect that is a pain to deal with. some side verification and a bit of code generation would really help. too bad that lately i am too overworked to allocate time for that tho!

@Rk yes, fixed point requires a lot of thinking about your exact problem domain and what the limits are before overflow occurs

which is why it's so vexing that it's usually explained so poorly because you can get way better results by having many different fixed point types with different bit layouts for different tasks, precisely so you can avoid overflow

but mixing different types is basically never explained

@eniko some years ago i helped with a project in a low energy context: once in a while a sensor would trigger an interrupt, waking up the device. not all the wake ups needed an action; to figure it out you would need to do a bunch of non-linear calculations and react accordingly. using floats pushed the device over the maximum energy consumption, as it was powered with non-rechargeable caps. fixed point math with an iterative solver outperformed fp, doubling the expected life of the device.
@Rk nice 😌
@eniko sometimes the old techniques are the appropriate ones. too bad juniors are actively told that dealing with bits and low level concepts is not relevant and 'premature optimization'. one of my clients has engineers with a terrible fear of esoteric concepts such as 'bit masks'... such is life!