we talked last week about what can go wrong with floating point numbers, so -- what can go wrong when using integers?

so far I have:

* 32 bit integers are smaller than you think (they only go up to 4 billion!)
* overflow
* sometimes you need to switch the byte order
* ?? (maybe something about shift / bitwise operations? not sure what can go wrong with that exactly)

I'd especially love real-world examples of things that have gone wrong, if you have them!

@b0rk This is mostly specific to C/C++, but in those languages signed overflow is undefined behavior, so simple addition can misbehave in all kinds of ways sometimes if the numbers get too big or too small.
@nelhage i didn’t realize it was undefined behaviour!

@b0rk @nelhage it really shouldn't be, given that the practical behaviour is identical on 99% of target ISAs, but here we are.

this undefined status also causes some pretty horrible nightmares with overflow checks being optimised out by the compiler, since to a lot of compiler devs "undefined" means "an excuse for an optimiser pass"

@b0rk @gsuberland @nelhage this is better in C23: all implementations are now required to use two's complement for signed integers, so there are fewer things left undefined now (signed overflow itself is still UB, though)

@gsuberland @b0rk @nelhage It's more complex than just how it behaves on target ISAs. If signed-overflow is defined, then the compiler can't tell whether

for (int i = 0; i < max_count; i++) { ... }

will terminate (because i++ might make i smaller!), which blocks certain optimizations. So defining signed-overflow would slow down lots of programs that never overflow (!)

It's a nasty mess with no good way out, like many aspects of C/C++.