So, I was thinking about this batshite solution I encountered a few years ago. An industrial fab facility had a SAP server, a big honking beast of a machine with 256 GB of RAM, and this was 2019, so yeah, money was spent.

But the database (MySQL) was, ostensibly, too slow. Their SAP consultants' solution?

On boot, the system would take 128 GB of RAM and create a ramdisk mounted on /mysql. It would then copy /var/lib/mysql (or wherever MySQL was stored) into /mysql, and start mysqld with flags pointing it at the ramdisk.
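For the curious, the boot-time dance presumably looked something like this. This is a hypothetical reconstruction, not their actual script; the tmpfs choice, paths, and sizes are all my assumptions:

```shell
# Hypothetical reconstruction of the boot sequence; all names/sizes assumed.
mount -t tmpfs -o size=128G tmpfs /mysql   # carve 128 GB of RAM into a ramdisk
rsync -a /var/lib/mysql/ /mysql/           # seed it from the on-disk datadir
chown -R mysql:mysql /mysql
mysqld_safe --datadir=/mysql &             # point mysqld at the ramdisk
```

Everything mysqld writes from that point on lives only in RAM until something copies it back out.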

Yes, this was absolutely insane, and tremendously fragile. But I digress. Every night at 0300, mysqld was shut down, and /mysql was rsynced to permanent storage. So, if they lost power in the middle of the day, they lost potentially hours of work that had to be manually re-entered.
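The nightly flush was presumably some cron-driven variant of this (again, a sketch; the exact commands and paths are my guesses):

```shell
# Hypothetical 03:00 cron job: stop the DB, flush RAM to disk, restart.
mysqladmin shutdown                         # clean stop so the files are consistent
rsync -a --delete /mysql/ /var/lib/mysql/   # the 40-45 minute part
mysqld_safe --datadir=/mysql &              # bring it back up on the ramdisk
```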

The punchline to this joke is that backing up the ramdisk to disk took about 40-45 minutes. The UPS backing this server had about half that in runtime. So, it never did anything even remotely useful. In fact, it sometimes led to some terrible corruptions that had to be painstakingly repaired. Yes, they did have a script that was triggered by the UPS going on battery. It never finished in time.

Anyways...... My brain was thinking about this, and decided..... CAN WE MAKE THIS WORK?!?!?!?!!111???/one?!//111

I think we can, but I would like to state that this is insane, and probably _really_ expensive for fairly little gain.

But hear me out. The system boots from a single-disk ZFS stripe, creates a ramdisk, and attaches it to the zpool as a mirror. Obviously, this would only work for smaller disks or shitloads of RAM, and really only benefits read performance.
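A sketch of what I mean, assuming a pool named tank on /dev/sda2 and the Linux brd ramdisk driver (all names are placeholders; please don't actually do this):

```shell
# Sketch only: mirror a single-disk zpool onto a RAM block device.
modprobe brd rd_nr=1 rd_size=134217728   # one /dev/ram0; rd_size is in KiB = 128 GB
zpool attach tank sda2 /dev/ram0         # turn the stripe into a mirror vdev
zpool status tank                        # resilver copies disk -> RAM
# On every reboot /dev/ram0 comes back empty, so the pool runs degraded
# until you re-attach the ramdisk and resilver the whole thing again.
```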

Would I do this? Probably only to see if I could make it work, I doubt it is actually a useful solution. But, it popped into my head just now, and I had to get it out, just so my brain would stop thinking about it.

Thoughts?

@nuintari I've looked into similar stuff with mixing SSDs and spinning rust for "fast read slow write" stuff, and the consensus seems to be "ZFS really doesn't like different disks in a vdev having different performance characteristics" because it won't account for that when scheduling read IO, so you'll get incredibly inconsistent performance.

(I'm not familiar with MySQL's innards as much as with postgres, but you'd think that if the box has more than enough RAM to fit the whole damn database as a ramdisk, you'd be able to tune it to operate more like redis, keeping the entire thing in memory so you never have to touch disk for reads... I know postgres can be fairly easily tuned to be very greedy with regards to RAM usage)
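For what it's worth, the sane MySQL version of "keep it all in RAM" is probably just a big InnoDB buffer pool; something like the following my.cnf fragment (values are illustrative for a 256 GB box, not tuned advice):

```ini
# Illustrative my.cnf fragment: working set in RAM, durability still on disk.
[mysqld]
innodb_buffer_pool_size        = 200G  # most of the box's 256 GB
innodb_buffer_pool_instances   = 16    # less contention on a pool this large
innodb_flush_log_at_trx_commit = 1     # commits still hit the redo log on disk
```

Reads come from memory once the pool is warm, but unlike the ramdisk stunt, a power loss only costs you the in-flight transaction.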

@becomethewaifu Yup, the original solution was absolutely terrible in so many ways. I too prefer PostgreSQL, but I know enough about MySQL that I could have fixed this right. But the MSP I was working for at the time had no spine for big, scary Linux stuff.

Yet another reason I hate working for MSPs.

@nuintari Indeed. I've heard enough stories that I'm glad I got in at [redacted], an incredibly boring corporate job where I don't have much in the way of sysadmin responsibilities, but our admins are generally Quite Competent, aside from sticking their heads in the sand with regards to IPv6...

I seem to remember Stack Exchange posting something about how, as part of their performance tuning, they eventually figured out it was faster to just give the database boxes a TB of RAM to fit the indexes in memory than to run a separate cache server, because querying the cache took about the same amount of time as rendering the page 'from nothing' with an in-memory index...

@becomethewaifu I _want_ to be doing sysadmin/networkadmin duties, but I want to do them WELL.

In the current era of abject laziness and surrender to the AI gods, doing quality work is dead.

I'm considering changing professions. Modern IT is completely terrible. Another profession might also be terrible, but at least I'll be too green to know any better.