we talked last week about what can go wrong with floating point numbers, so -- what can go wrong when using integers?

so far I have:

* 32 bit integers are smaller than you think (unsigned ones only go up to about 4 billion, and signed ones only to about 2 billion!)
* overflow
* sometimes you need to switch the byte order
* ?? (maybe something about shift / bitwise operations? not sure what can go wrong with that exactly)
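Not from the thread, but a quick Python sketch of the first few bullets. Python's own ints are arbitrary precision, so the 32-bit wraparound is emulated here by masking to the low 32 bits:

```python
import struct

# 32-bit values wrap around: emulate a uint32 by keeping only the low 32 bits.
MASK32 = 0xFFFFFFFF
total = (4_000_000_000 + 1_000_000_000) & MASK32
print(total)  # 705032704, not 5000000000

# byte order: the same integer serialized big-endian vs little-endian
n = 0x12345678
print(struct.pack(">I", n).hex())  # '12345678'
print(struct.pack("<I", n).hex())  # '78563412'
```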

I'd especially love real-world examples of things that have gone wrong, if you have them!

@b0rk problems/mistakes when casting (eg, truncation or signed/unsigned). not handling signed ints correctly when doing bit manipulation (eg, masking bits for truncation)
@bnewbold are there good reasons to do bit manipulation on signed integers? (what are they?)
@b0rk I don't have a ref to a real example off the top of my head, but as a possible example, constructing a UDP datagram header length field.
start accidentally with a default (signed) 32-bit int and sum the payload bytes. the payload is huge and you roll over 2 billion; that's undefined behavior in C, but the result might appear negative. you verify that the payload is less than 2^16 (it is, because it's negative!), then mask off the low 16 bits and shift them into header alignment
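A sketch of that failure mode, with the C signed 32-bit semantics simulated in Python (the payload size is made up for illustration):

```python
def to_int32(x):
    """Interpret x as a C signed 32-bit int (two's complement)."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

# a ~3 GB payload summed into a signed 32-bit int rolls over 2**31
total = to_int32(3_000_000_000)
print(total)           # -1294967296: appears negative
print(total < 2**16)   # True: the "is it small enough?" check passes anyway
print(total & 0xFFFF)  # 24064: garbage goes into the 16-bit length field
```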
@b0rk I think the more common issue is that one thinks they are manipulating an unsigned int, or assuming that a signed value is safely in positive range, but it isn't
@bnewbold yeah that makes sense. someone else mentioned that some languages don't have unsigned ints
@b0rk the most common times I end up doing bitwise "mask and shift" on integers are protocol headers, binary file formats, and embedded hardware register interaction, in C. but then sometimes I do a hack in python and forget when integers are signed/unsigned, and end up using a library like 'struct' instead of figuring out how builtin types work
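For what that 'struct' approach looks like: the format string says the signedness, width, and byte order explicitly, so there's nothing to forget (a UDP-style header used as the example, with made-up field values):

```python
import struct

# Pack a UDP-style header: four unsigned 16-bit fields ("H"),
# network (big-endian) byte order ("!").
src_port, dst_port, length, checksum = 53, 40000, 512, 0
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
print(header.hex())  # '00359c4002000000'

# and struct refuses out-of-range values instead of silently truncating:
try:
    struct.pack("!H", 70000)  # doesn't fit in 16 bits
except struct.error as e:
    print("rejected:", e)
```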
@b0rk @bnewbold One of them is fast multiplication/division by powers of two.
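A quick illustration of that, plus the negative-number gotcha (in Python, right shift floors toward negative infinity, matching `//`; in C, right-shifting a negative int was implementation-defined for a long time, so `x >> 1` isn't a safe substitute for `x / 2` there):

```python
print(5 << 3)   # 40: same as 5 * 8
print(40 >> 3)  # 5: same as 40 // 8

# careful with negatives: -7 >> 1 is -4 (floor division),
# while C's -7 / 2 truncates toward zero and gives -3
print(-7 >> 1)  # -4
print(-7 // 2)  # -4
```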