When it comes to code:
* 0.5
/ 2
@RosaCtrl Multiplying by a floating-point constant can lose precision when the value is an integer. A double's 53-bit significand represents integers exactly only up to 2^53; beyond that, the low bits are lost.
Compilers are generally smart enough to optimise division by a constant anyway.
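A quick sketch of the integer case described above (the specific value is just an illustration): halving with integer division stays exact, while `* 0.5` first converts the integer to a double, discarding bits below the 53-bit significand.

```python
n = (1 << 60) + 2   # 61-bit even integer, not exactly representable as a double

# Integer division keeps every bit.
print(n // 2)        # 576460752303423489

# n is rounded to the nearest double (spacing 2**8 near 2**60) before the
# multiply, so the +2 vanishes and the result is off by one.
print(int(n * 0.5))  # 576460752303423488
```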