wizard zines

@b0rk This goes against literally everything I was taught in CS about using the right data type for your data.
@b0rk Like yeah, if you *think* you're going to need integer values bigger than 4 billion, sure, use a 64-bit int, but why waste the memory if you're setting up a for loop or working with 32-bit color or 16-bit audio or screen coordinates...
@b0rk General advice when programming is pretty terrible for this reason and I'm kind of disappointed to see this making the rounds tbh.
@b0rk Like, I admit I'm coming at this from a game dev perspective, not a general software dev perspective, but I don't honestly know the last time I've used or felt the need to use a 64-bit integer in my work, or had to deal with an overflow because I didn't.
@dizzy @b0rk Memory, cache, bandwidth, alignment, disk storage etc... making everything 64-bit instead of 32-bit has impact on a lot more than just the maximum range.
@Scali @b0rk Yeah, in a *bad* way. Cache and alignment are handled by your compiler, and if you're trying to outsmart your compiler you're doing something wrong. Memory and disk space... I don't even know what you're trying to say there. 64-bit data uses more of both.
@Scali @b0rk Ditto bandwidth for the latter, at that. And if you're trying to talk memory/disk access *speed*... well, I can't speak to the former (though I'm probably going to file that under 'your compiler knows better than you how to optimize for this' as well), and disk speed... 32-bit vs. 64-bit should literally have nada effect. Disk accesses are done in ~4 KB blocks regardless, and there's nothing about SATA or PCIe that favors 64-bit over 32-bit access.
@dizzy @b0rk If you only access one value. But if you have arrays of 32-bit values and change them to 64-bit, obviously you'll cross cache-line boundaries sooner. Also, the "your compiler knows better" claim was already debunked in the 70s... and 80s, 90s, 2000s, 2010s... You get the idea.

@Scali @b0rk Debunked by whom? When? Sources? Because I've been coding since the 90s, surrounded by the advice of much better coders than me, and "don't try to out-optimize your compiler" has *always* been the advice. More so these days than ever.

Sure, it probably was "debunked in the 70s". News flash, man: we don't live in the 70s. Compilers are getting smarter and better at optimization all the time.

One of my friends points out that your cache argument is completely backwards. CPUs use cache *better* when data is more tightly packed.

@dizzy @b0rk Maybe it's more about what is right.

In the old days we were restricted by memory; today computers are larger and handle 32- and 64-bit values much more easily, since registers are often the same size.

So I guess we end up optimising for NOT having to go back, change programs and write data migration programs?

@dizzy @b0rk a lot of lesson plans are written once and not really touched except where necessary

That, and the CS field has a sizable rift between academia and practitioners, so educators can be out of date

This is of course colored by my experience, but it is very unusual for me to see someone use a `short` in code (a C/C++ compiler will generally promote it to an `int` at the first opportunity anyway), though `uint16_t` in an interface happens where appropriate