Big-Endian Testing with QEMU

> When programming, it is still important to write code that runs correctly on systems with either byte order
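For context on the thread title, a minimal sketch of big-endian testing under QEMU's user-mode emulation, assuming a Debian-style host with the `gcc-s390x-linux-gnu` and `qemu-user` packages installed (the package and tool names are assumptions about your distribution):

```shell
# A tiny probe whose output depends on the byte order it runs under.
cat > endian_probe.c <<'EOF'
#include <stdio.h>
int main(void) {
    unsigned x = 1;
    /* The first byte of x is 1 on little-endian hosts, 0 on big-endian ones. */
    printf("%s-endian\n", *(unsigned char *)&x ? "little" : "big");
    return 0;
}
EOF

# Cross-compile for s390x (big-endian) and run it under qemu-user;
# -static avoids needing an s390x sysroot for the dynamic linker.
s390x-linux-gnu-gcc -static -O2 endian_probe.c -o endian_probe
qemu-s390x ./endian_probe    # prints "big-endian"
```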

What you should do instead is write all your code to be little-endian only: the only relevant big-endian architecture left is s390x, and if someone wants to run your code on s390x, they can afford a support contract.

Outsourcing endianness pain to your customers is an easy way to teach them about segfaults and silent data corruption. s390x is niche, endian bugs are not.

Network protocols and file formats still need a defined byte order, and the first time your code talks to hardware or reads old data, little-endian assumptions leak all over the place. Ignoring portability buys you a pile of vendor-specific hacks later, because your team will meet those 'irrelevant' platforms in appliances, embedded boxes, or somebody else's DB import path long before a sales rep waves a support contract at you.

Not sure why you consider that an issue: if you need to interact with a format that specifies values to be BE, just always byte-swap. And every appliance/embedded box I had to interact with ran either x86 or some flavour of 32-bit ARM (in LE mode, of course).

Assuming an 8-bit byte used to be a "vendor-specific hack." Assuming two's complement integers used to be a "vendor-specific hack." When all the 36-bit machines died, and all the one's complement machines died, we got over it.

That's where big endian is now. All the BE architectures are dying or dead. No big endian system will ever be popular again. It's time for big endian to be consigned to the dustbin of history.

> No big endian system will ever be popular again

Cries in 68k nostalgia