When it comes to code:

`* 0.5`: 19.2%
`/ 2`: 80.8%

@RosaCtrl Multiplying by a floating-point constant can lose precision when the value is an integer: a double-precision float can represent signed integers exactly only up to 53 bits, beyond which the low bits are lost.

Compilers are generally smart enough to optimise division by a constant anyway.

@mlen Insightful! But which do you prefer to see?
@RosaCtrl Depends on what is more readable in a given context, but usually division.