I've switched a VM on my #proxmox host at home from a #9pfs mount to #virtiofs and ran a few #fio benchmarks.
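For context, the guest-side change looks roughly like this; the "media" share tag and mount point are hypothetical, and the virtiofs share is served by virtiofsd on the host:

```shell
# Before: 9p over virtio (the "media" tag and /mnt/media path are assumptions)
mount -t 9p -o trans=virtio,version=9p2000.L media /mnt/media

# After: virtiofs (same hypothetical tag, exported via virtiofsd on the host)
mount -t virtiofs media /mnt/media
```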
9pfs (before):
seq read (1M, 1 thread) - 1950 MiB/s
rand read (4K, 4 threads) - 213 MiB/s
seq write (1M, 1 thread) - 468 MiB/s
rand write (4K, 4 threads) - 80 MiB/s
virtiofs (after):
seq read (1M, 1 thread) - 5258 MiB/s
rand read (4K, 4 threads) - 291 MiB/s
seq write (1M, 1 thread) - 1074 MiB/s
rand write (4K, 4 threads) - 84 MiB/s
That's 2.7x sequential read, 2.3x sequential write, 1.4x random read, and 1.05x random write.
Pretty good improvements!
Especially since this is my media (movies/TV) mount, where the I/O is mostly sequential anyway.
Note the underlying hardware is a Samsung QVO SATA SSD, which is a rather slow SSD and likely the bottleneck in the random read/write tests. I'd expect much bigger differences on a fast NVMe drive.
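For anyone wanting to reproduce this, the four tests above correspond roughly to fio invocations like these. The directory, file sizes, and runtimes are my assumptions, not the exact commands used:

```shell
# Hedged sketch of the four benchmarks; /mnt/media/fio, --size, and
# --runtime are assumptions. --direct=1 bypasses the guest page cache so
# the transport (9p/virtiofs) plus host storage is what's measured.
fio --name=seq-read   --rw=read      --bs=1M --numjobs=1 \
    --size=4G --runtime=30 --time_based --direct=1 \
    --ioengine=libaio --directory=/mnt/media/fio --group_reporting

fio --name=rand-read  --rw=randread  --bs=4K --numjobs=4 \
    --size=1G --runtime=30 --time_based --direct=1 \
    --ioengine=libaio --directory=/mnt/media/fio --group_reporting

fio --name=seq-write  --rw=write     --bs=1M --numjobs=1 \
    --size=4G --runtime=30 --time_based --direct=1 \
    --ioengine=libaio --directory=/mnt/media/fio --group_reporting

fio --name=rand-write --rw=randwrite --bs=4K --numjobs=4 \
    --size=1G --runtime=30 --time_based --direct=1 \
    --ioengine=libaio --directory=/mnt/media/fio --group_reporting
```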
The July 16th, 2024 Jail/Zones Production User Call is up:
We did a #9pfs deep dive, discussed example #CVEs, got a #Jailer update and hacked on it, discussed #VxLAN over #WireGuard and #IPsec, and more!
"Don't forget to slam those Like and Subscribe buttons."
FOR THE GLORY OF THE REPUBLIC!
https://www.qemu.org/2022/12/14/qemu-7-2-0/
#9pfs: massive general performance improvement, somewhere between a factor of 6 and 12.
Badly needed. I tried 9p once and switched to NFS because of the really low performance. Will give it a try after updating to 7.2.x.