So I really want to get the latest MINIX 3 source code building in a VirtualBox VM, and it's not simple. First, you have to install the latest development snapshot.

Then you have to somehow figure out which VirtualBox features to turn off to make MINIX stable. So far, I've discovered that AHCI SATA doesn't work, 512 MB of RAM seems to be safe (not too much or too little to rebuild the system), only 1 CPU is supported, I/O APIC and PAE/NX should be enabled, and nested VT-x and nested paging should be disabled.
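
For reference, here's roughly how those settings map onto VBoxManage. The VM name "minix" and the disk image name are placeholders, so adjust to taste:

# VM name and disk image name are placeholders.
VBoxManage modifyvm "minix" --memory 512 --cpus 1
VBoxManage modifyvm "minix" --ioapic on --pae on
VBoxManage modifyvm "minix" --nested-hw-virt off --nestedpaging off
# Use an IDE controller instead of AHCI/SATA for the boot disk:
VBoxManage storagectl "minix" --name "IDE" --add ide
VBoxManage storageattach "minix" --storagectl "IDE" --port 0 --device 0 --type hdd --medium minix.vdi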

For MINIX 3, it looks like you also need to emulate the PIIX3 chipset rather than the newer ICH9. I tried ICH9 and suddenly DHCP wasn't resolving DNS and the Internet wasn't working; I have no idea why. I'm using the same old PCnet-FAST III (NAT) either way, emulating the universally-supported Am79C973 Ethernet chip (PDF datasheet):

https://www.amd.com/content/dam/amd/en/documents/archived-tech-docs/datasheets/21510.pdf
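
Those two pieces also map onto VBoxManage flags (again, "minix" is a placeholder VM name):

# PIIX3 chipset plus the PCnet-FAST III (Am79C973) NIC on NAT.
VBoxManage modifyvm "minix" --chipset piix3
VBoxManage modifyvm "minix" --nic1 nat --nictype1 Am79C973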

So now it's building /usr/src more slowly than anything I've seen so far, but hopefully without weird build failures? We shall soon find out.

So here's a story: at work, I'm one of two software engineers working on a plug-in board that handles NIC switch capabilities. We're using a Marvell SOHO router-on-a-chip that can't talk to any x86 module (everyone always pairs it with ARM or MIPS), and it's good to have a dedicated CPU for networking, so here we go.

The nice thing about the Marvell switch is that you can set up VLANs and do basic routing entirely in the switch hardware, without involving the CPU. Otherwise, a 1 GHz 32-bit Cortex-A core is actually a bottleneck for 1 Gbps.
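
To make that concrete: on Linux, Marvell SOHO switch chips usually show up through the DSA framework, and hardware-offloaded VLANs look roughly like this. This is just a sketch, not our actual configuration; the port names (lan1, lan2) and VLAN IDs are hypothetical:

# Hypothetical port names and VLAN IDs; assumes the switch is driven via Linux DSA.
ip link add name br0 type bridge vlan_filtering 1
ip link set lan1 master br0
ip link set lan2 master br0
bridge vlan add dev lan1 vid 10 pvid untagged
bridge vlan add dev lan2 vid 20 pvid untagged
ip link set br0 up

Once the ports are bridged like that, the switch forwards traffic between them on its own, and the CPU only sees what's actually addressed to it.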

Not a huge bottleneck, but it ends up tapping out at around 600-750 Mbps in either direction. I need to send a patch to the Linux kernel mailing list to back out an "optimization" in the Cadence macb driver that turns off the hardware Ethernet checksum feature (to save who knows what) and then burns valuable CPU time computing an Ethernet frame checksum the hardware would otherwise handle. The upshot of backing it out is faster sending from the ARM, because the Ethernet controller's onboard checksum engine gets used instead of the CPU.
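
If you want to poke at this from userspace, ethtool is the generic way to inspect and toggle checksum offload. Whether it reflects a driver-internal workaround like this one is another matter, but it's a quick sanity check; the interface name is a placeholder:

# "eth0" is a placeholder for the macb interface.
ethtool -k eth0 | grep checksumming   # show current TX/RX checksum offload state
ethtool -K eth0 tx on rx on           # ask the driver to enable offload, if it allows it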

My colleague found out about that broken optimization from someone who worked on the Linux kernel, who pointed him at a GitHub repo with some kernel patches for a particular Microchip switch; when we applied just the part involving the checksum, it magically got faster.

I wanted to make sure I understood why it made sense to reverse the commit that was trying to optimize the throughput but failing, and I did.

Sometimes writing the commit is the hardest part of submitting a patch.

That's not the interesting part. What was really unfortunate with an earlier version of the PCB we were testing was that the CPU would start panicking with "vm page not found" errors, which (based on Google searches, anyway) seemed to be correlated with bad RAM. The only way to make it panic was under the load of "iperf3", a client/server TCP/IP throughput tool that works quite nicely (so does "nuttcp").
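
For anyone who hasn't used it, the basic iperf3 test that was hammering the board looks like this; the address and duration here are placeholders:

# On one endpoint (e.g. the board under test):
iperf3 -s
# On the other machine; -R reverses direction so the server side transmits:
iperf3 -c 192.168.1.50 -t 60
iperf3 -c 192.168.1.50 -t 60 -R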

Dropping the max CPU speed to 800 MHz made the panics go away. Hmm.
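
A generic way to cap the clock on a Linux board is through the cpufreq sysfs interface; the value is in kHz, and the exact policy layout varies by SoC, so treat this as a sketch rather than what we actually did:

# Cap cpu0's maximum frequency at 800 MHz (value is in kHz).
echo 800000 | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq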

We had a couple different hypotheses and put them all into effect for what we hoped would be (and appears to be) the final version of the NIC board.

One theory was that the RGMII Ethernet signals at 125 MHz DDR were interfering with the DDR3L RAM signals. So those lines got routed on different layers with a ground layer in between. I think one of the chips got rotated to shorten the paths.

Another theory was that the voltages were too variable; the SAMA7 seems to have tight supply-voltage tolerances.

As you can imagine, we were all both hopeful and anxious that the revised boards that arrived just before Christmas would work reliably, and long story short, they did. 90 MHz to 1 GHz, stable, no problem.

I'm learning a lot to increase my confidence in hardware design and debugging on the job.

Well, it looks like I appeased the MINIX microkernel god(desse)s, and my build is still proceeding.

Folding@Home defaults to using one fewer thread than the number of threads your CPU has, which from what I've seen typically gets rounded down to the nearest even number. So it's using 14 CPU threads (presumably 16 - 1 = 15, rounded down to 14), plus the GPU, leaving 1+ CPU threads free for a MINIX VM.

There's a thermal budget the active CPU cores have to stay within, but I'm not in a hurry, as long as the build builds. 🤞

#MINIX #VirtualBox

This "current" version of MINIX hasn't been touched since Nov 14, 2018 (just over 6 years).

I've worked with much older codebases. And it supports ELF shared libraries, which is a huge improvement over having to statically link things.

It comes with X11 and a NetBSD-derived package system. Instead of MINIX's original "Amsterdam Compiler Kit", it's now using Clang 3.6, which is old but seems to fit well in 256 MB of RAM (I've set the VM to 512 MB to be safe, and to cache disk blocks).
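
Assuming that package system is the usual pkgsrc/pkgin combination (pkgin being the binary-package front end), day-to-day use looks something like this; the package name is just an example:

# Assumes pkgin is present and a binary package repository is configured.
pkgin update
pkgin install vim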

@jhamby
I'm thinking back to the days of running BSD 4.2 on a VAX 11/780, with 1 MiB of RAM, later upgraded (at great expense) to 2 MiB.
Gee, our old LaSalle ran great!
But I don't at all want to give up the modern conveniences that require three orders of magnitude more memory.