we talked last week about what can go wrong with floating point numbers, so -- what can go wrong when using integers?

so far I have:

* 32 bit integers are smaller than you think (unsigned only goes up to about 4 billion, and signed only to about 2 billion!)
* overflow
* sometimes you need to switch the byte order
* ?? (maybe something about shift / bitwise operations? not sure what can go wrong with that exactly)

I'd especially love real-world examples of things that have gone wrong, if you have them!

@b0rk
* underflow
* signed and unsigned may act differently
* loss of precision if the order of operations decreases the number of bits in the intermediate results
@gdinwiddie can you say more about "signed and unsigned may act differently"? (how?)

@b0rk
It's been a long time since I've worked at that level, but signed integers typically presume a two's-complement representation; unsigned ones don't. That can affect shift operations, arithmetic between operands of different types, and probably other things, depending on how they're implemented.

If you divide by two by shifting right, you want to shift in a zero for unsigned, and repeat the sign bit for signed, for example.

@gdinwiddie why would you shift a signed integer? i always assumed shift was only used on unsigned integers
@b0rk @gdinwiddie arithmetic right shift can divide signed numbers by 2 and keep the sign. Left shift multiplies by 2 with no problem.
@b0rk @gdinwiddie arithmetic right shift re-inserts the sign bit, while logical right shift always inserts a 0.