Big-Endian Testing with QEMU
> When programming, it is still important to write code that runs correctly on systems with either byte order
What you should do instead is write all your code so it is little-endian only, as the only relevant big-endian architecture is s390x, and if someone wants to run your code on s390x, they can afford a support contract.
It goes without saying that all binary network protocols should document their byte order, and that if you're implementing a protocol documented as big endian you should use ntohl and friends to ensure correctness.
However, if you're designing a new network protocol, choosing big endian is insanity. Use little endian, skip the macros, and just add:

    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ != __ORDER_LITTLE_ENDIAN__
    #error "little-endian targets only"
    #endif
And honestly, at this point it's mostly a historical artifact. If we're writing that kind of stuff then sure, we need to care, but for producing modern stuff it's honestly a massive waste of time.
FWIW I do hobby stuff for Amigas (68k, big-endian), but that's just that: hobby stuff.
The blog post linked in the OP explains this better IMHO [0]:
> If the data stream encodes values with byte order B, then the algorithm to decode the value on computer with byte order C should be about B, not about the relationship between B and C.
The point (for me) is not that your code runs on an s390; it is that you abstract your local implementation details away from the data interchange formats. And unfortunately almost all of the processors are little-endian, and many of the popular and unavoidable external formats are big... (a concrete sketch follows the links below)
[0] https://commandcenter.blogspot.com/2012/04/byte-order-fallac...
[1] https://github.com/apple/darwin-xnu/blob/main/EXTERNAL_HEADE...
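To make the byte-order-fallacy point concrete, here is a minimal Go sketch (the package and function names are just for illustration); the decode is written purely in terms of the stream's byte order:

    package wire // hypothetical package name

    // be32 decodes a 32-bit big-endian value from the first four bytes of p.
    // It depends only on the stream's byte order, so it returns the same
    // result on little-endian and big-endian hosts.
    func be32(p []byte) uint32 {
        return uint32(p[0])<<24 | uint32(p[1])<<16 | uint32(p[2])<<8 | uint32(p[3])
    }

The standard library's encoding/binary.BigEndian.Uint32 does the same job; either way, the host's endianness never enters into it.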
Outsourcing endianness pain to your customers is an easy way to teach them about segfaults and silent data corruption. s390x is niche, endian bugs are not.
Network protocols and file formats still need a defined byte order, and the first time your code talks to hardware or reads old data, little-endian assumptions leak all over the place. Ignoring portability buys you a pile of vendor-specific hacks later, because your team will meet those 'irrelevant' platforms in appliances, embedded boxes, or somebody else's DB import path long before a sales rep waves a support contract at you.
Assuming an 8-bit byte used to be a "vendor-specific hack." Assuming two's complement integers used to be a "vendor-specific hack." When all the 36-bit machines died, and all the one's complement machines died, we got over it.
That's where big endian is now. All the BE architectures are dying or dead. No big endian system will ever be popular again. It's time for big endian to be consigned to the dustbin of history.
> No big endian system will ever be popular again
Cries in 68k nostalgia
Don't ignore endianness. But making little endian the default is the right thing to do; it is so much more ubiquitous in the modern world.
The vast majority of modern network protocols use little endian byte ordering. Most Linux filesystems use little endian for their on-disk binary representations.
There is absolutely no good reason for networking protocols to be defined to use big endian. It's an antiquated arbitrary idea: just do what makes sense.
> When programming, it is still important to write code that runs correctly on systems with either byte order
I contend it's almost never important and almost nobody writing user software should bother with this. Certainly, people who didn't already know they needed big-endian should not start caring now because they read an article online. There are countless rare machines that your code doesn't run on--what's so special about big endian? The world is little endian now. Big endian chips aren't coming back. You are spending your own time on an effort that will never pay off. If big endian is really needed, IBM will pay you to write the s390x port and they will provide the machine.
> There are countless rare machines that your code doesn't run on--what's so special about big endian?
One difference is that when your endian-oblivious code runs on a BE system, it can be subtly wrong in a way that's hard to diagnose, which is a whole lot worse than not working at all.
There are a lot of odd (by modern standards) machines out there.
You're also the most likely person to try to run your code on an 18 bit machine.
You are correct. Honestly, I couldn't disagree more with the article; at this point I can't imagine why it's important.
It's also increasingly hard to test, particularly when you have large, expensive test suites that run incredibly slowly on these emulated machines.
For Rust we have Loom [0], but don't expect it to work on your whole application.
If you're using Go on GitHub (and doing stuff where this actually matters) adding this to your CI can be as simple as this: https://github.com/ncruces/wasm2go/blob/v0.3.0/.github/workf...
On Linux it's really as simple as installing QEMU binfmt and doing:
    GOARCH=s390x go test
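To show the kind of bug such a run catches, here's a hedged sketch of a test (file, package, and helper names are made up, not taken from the linked post or workflow) that passes natively on amd64 but fails once the test binary runs as s390x under QEMU:

    // wire_test.go
    package wire

    import (
        "encoding/binary"
        "testing"
        "unsafe"
    )

    // brokenDecode reinterprets raw bytes through the host's in-memory layout
    // instead of the declared little-endian wire order, the classic mistake.
    func brokenDecode(p []byte) uint32 {
        return *(*uint32)(unsafe.Pointer(&p[0]))
    }

    func TestWireDecode(t *testing.T) {
        wire := []byte{0x12, 0x34, 0x56, 0x78}   // little-endian on the wire
        want := binary.LittleEndian.Uint32(wire) // 0x78563412
        if got := brokenDecode(wire); got != want {
            t.Fatalf("decoded %#x, want %#x", got, want)
        }
    }

Natively the test passes; with qemu-user binfmt handlers registered (for example via the qemu-user-static package on Debian/Ubuntu), GOARCH=s390x go test runs the same test binary under emulation and it fails, which is the whole point.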
I wrote a similar post [1] some 16 years ago. My solution back then was to install Debian for PowerPC on QEMU using qemu-system-ppc.
But Hans's post uses user-mode emulation with qemu-mips, which avoids having to set up a whole big-endian system in QEMU. It is a very interesting approach I was unaware of. I'm pretty sure qemu-mips was available back in 2010, but I'm not sure if the gcc-mips-linux-gnu cross-compiler was readily available back then. I suspect my PPC-based solution might have been the only convenient way to solve this problem at the time.
Thanks for sharing it here. It was nice to go down memory lane and also learn a new way to solve the same problem.