So, I was thinking about this batshit solution I encountered a few years ago. An industrial fab facility had a SAP server, a big honking beast of a machine with 256 GB of RAM, and this was 2019, so yeah, money was spent.

But the database (MySQL) was, ostensibly, too slow. Their SAP consultants' solution?

On boot, the system would take 128 GB of RAM and create a ramdisk mounted at /mysql. It would then copy /var/lib/mysql (or wherever MySQL was stored) into /mysql, and start mysqld with flags pointing it at the ramdisk.
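The boot sequence presumably looked something like this. The 128 GB tmpfs size and the /mysql mount point come from the story above; every other path, flag, and detail here is my guess at a reconstruction, not the consultants' actual script:

```shell
#!/bin/sh
# Hypothetical reconstruction of the boot-time ramdisk setup.

# Carve 128 GB of RAM into a tmpfs-backed ramdisk at /mysql.
mount -t tmpfs -o size=128G tmpfs /mysql

# Copy the on-disk data directory into RAM.
rsync -a /var/lib/mysql/ /mysql/

# Point mysqld at the ramdisk instead of persistent storage.
mysqld --datadir=/mysql &
```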

Yes, this was absolutely insane, and tremendously fragile. But I digress. Every night at 0300, mysqld was shut down and /mysql was rsynced to permanent storage. So if they lost power in the middle of the day, they potentially lost hours of work that had to be manually re-entered.

The punchline to this joke is that backing up the ramdisk to disk took about 40-45 minutes. The UPS backing this server had about half that in runtime. So it never did anything even remotely useful. In fact, it sometimes led to terrible corruption that had to be painstakingly repaired. Yes, they did have a script triggered by the UPS going on battery. It never finished in time.

Anyways...... My brain was thinking about this, and decided..... CAN WE MAKE THIS WORK?!?!?!?!!111???/one?!//111

I think we can, but I would like to state that this is insane, and probably _really_ expensive for fairly little gain.

But hear me out. The system boots from a single-disk ZFS stripe, creates a ramdisk, and adds it to the zpool as a mirror. Obviously, this would only work for smaller disks or shitloads of RAM, and really only benefits read performance.
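A minimal sketch of the idea. ZFS accepts file-backed vdevs, so a file on tmpfs can stand in for a ramdisk device; the pool name, device, and sizes below are all made up for illustration:

```shell
# Assumes a pool "tank" already exists as a stripe on /dev/sda2.

# Create a ramdisk and a file-backed vdev on it, sized to match the disk.
mount -t tmpfs -o size=100G tmpfs /ramdisk
truncate -s 100G /ramdisk/mirror.img

# Attach the RAM-backed vdev to the existing disk, converting the
# single-disk stripe into a mirror. ZFS resilvers onto the ramdisk.
zpool attach tank /dev/sda2 /ramdisk/mirror.img
```

Every write still has to land on the real disk too, which is why this only really helps reads; and on reboot the ramdisk side of the mirror is gone and has to resilver from scratch.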

Would I do this? Probably only to see if I could make it work; I doubt it is actually a useful solution. But it popped into my head just now, and I had to get it out, just so my brain would stop thinking about it.

Thoughts?

@nuintari As a curious aside, Linux MD RAID has the concept of a “write-mostly” member, which mostly doesn’t serve any read I/Os issued to the array, only writes. (Unless the array becomes degraded.)
Obviously traditional RAID has other issues…
With ZFS I guess you could just clone the persistent pool in the RAM disk, then use snapshots to periodically transfer the changes back to the persistent storage pool?
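That scheme would presumably look roughly like this: incremental sends from the RAM pool back to the persistent pool on a timer. Pool, dataset, and snapshot names here are placeholders:

```shell
# "rampool" carries the live data in RAM; "persistpool" is on disk.
# Take a new snapshot and send only the delta since the last sync.
zfs snapshot rampool/mysql@sync-new
zfs send -i rampool/mysql@sync-prev rampool/mysql@sync-new | \
    zfs receive -F persistpool/mysql

# Rotate snapshot names so the next run has a baseline.
zfs destroy rampool/mysql@sync-prev
zfs rename rampool/mysql@sync-new rampool/mysql@sync-prev
```

Since only the changed blocks travel, each sync should be far cheaper than the original full 40-45 minute rsync, at least for write patterns that touch a small fraction of the data.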

@pmdj I mean, this was a pointless mental exercise whose primary goal was to get this awful idea out of my head.

Clone to RAM + regular snapshots is just the same issue as the original shit solution, done more often. Given the original problem was sitting 100 GB in a ramdisk, periodic snapshots would probably be slower than just using the SAS disks under the hood.

@nuintari That really depends on the access patterns. For mostly tiny writes, I'd expect the periodic sync to clearly outperform the raw disk-based approach. For mostly-reading loads you'd of course be better off with a large (ARC) cache.

@pmdj Yeah, in short, this is a terrible solution. There is almost always a better way than such weak-ass hacks.

It wasn't even a real mental exercise, it was an effort to get a dumbass idea out of my head.