This is a disk I/O report for the last 30 days for every #Proxmox node in the cluster. Something happened around March 28 that caused high disk usage, and I can’t figure out what. Replication tasks are failing randomly, and in fact all disk operations are slow. Meanwhile, there are no significant changes in CPU, RAM, or network usage.
I was hoping to pinpoint which LXCs are causing this, but they all have similar disk I/O graphs.
Well, shit.
#homelab #ProxmoxCluster #HighDiskUsage #zfs #mystery

Ok, that was a kinda premature and stupid panic.

The #Beszel agent wasn't reporting disk I/O until a recent update. All my agents update automatically, and that update landed right around March 28-30. After that, the data began to flow.

In fact, the sudden disk issues are limited to a single #Proxmox node, and it's clearly visible on the I/O pressure stall graph. The spikes before March 30 are backups. Then it went crazy.
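
(The I/O pressure stall numbers come from the kernel's PSI interface, so you can read them straight off the node. A minimal sketch, assuming Linux 4.20+ with PSI enabled:)

```python
# Read the kernel's I/O pressure stall info (PSI) from /proc/pressure/io.
# Format per line: "some avg10=0.12 avg60=0.08 avg300=0.02 total=123456"
def read_io_pressure(path="/proc/pressure/io"):
    pressure = {}
    with open(path) as f:
        for line in f:
            kind, *fields = line.split()
            pressure[kind] = {key: float(val) for key, val in
                              (field.split("=") for field in fields)}
    return pressure

psi = read_io_pressure()
# "full" means ALL non-idle tasks were stalled on disk I/O at once
print(f"some avg60: {psi['some']['avg60']}%  full avg60: {psi['full']['avg60']}%")
```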

#homelab #zfs

So I moved all the LXCs to another #Proxmox node. The problematic drive’s usage has returned to normal, while the node the containers were moved to is still fine.
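
(The move itself is just `pct migrate` per container. A rough sketch of how it can be scripted; the container IDs and target node name below are placeholders, not my actual setup:)

```python
# Sketch: evacuate LXCs with the stock Proxmox `pct` CLI.
# CTIDs and the target node name are made-up placeholders.
import subprocess

TARGET_NODE = "pve2"          # hypothetical destination node
CONTAINERS = [101, 102, 103]  # hypothetical LXC IDs on the sick node

for ctid in CONTAINERS:
    # --restart stops the container and starts it again on the target;
    # LXC has no true live migration in Proxmox
    subprocess.run(["pct", "migrate", str(ctid), TARGET_NODE, "--restart"],
                   check=True)
    print(f"CT {ctid} moved to {TARGET_NODE}")
```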

Looks like the issue is a WD Green SSD on that node. It took only 1,429 power-on hours to retire it.
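
(That figure comes from SMART. Something like this pulls it out of smartctl output; attribute names vary by vendor, so the ones below are guesses for this particular drive:)

```python
# Sketch: grab power-on hours and the wear attribute from `smartctl -A`.
# Assumes an ATA-style attribute table; raw values containing spaces
# (e.g. temperature min/max) would need smarter parsing than parts[-1].
import subprocess

def smart_attrs(device):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    attrs = {}
    for line in out.splitlines():
        parts = line.split()
        if parts and parts[0].isdigit():   # attribute rows start with an ID
            attrs[parts[1]] = parts[-1]    # name -> raw value
    return attrs

attrs = smart_attrs("/dev/sda")            # device path is a placeholder
print("Power-on hours:", attrs.get("Power_On_Hours"))
print("Wear:", attrs.get("Wear_Leveling_Count")
      or attrs.get("Media_Wearout_Indicator"))
```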

I'm running #diskscan on it right now. No idea why, since I'm going to replace it with a spare NVMe drive I have anyway.

#homelab #Proxmox #ssd #nvme #drive

@yehor If you're running ZFS on a consumer SSD like a WD Green, it can wear out extremely quickly. Maybe that's what happened?
@woof yeah, I think so
@yehor I went through a pair of consumer SSDs in a zmirror a while back; they used up their 500 TB write endurance in only a few years. They've since been replaced with Samsung SM863 enterprise SSDs, which are rated for 6.2 PB of writes, so they should last a long time.
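
Back-of-the-envelope on those numbers (taking "a few years" as three, which is my assumption):

```python
# Endurance math: how fast do writes burn through a TBW rating?
TBW_CONSUMER = 500    # TB written, consumer SSD rating
TBW_SM863 = 6200      # TB written, Samsung SM863 rating (6.2 PB)
DAYS = 3 * 365        # "a few years" assumed to be three

daily_tb = TBW_CONSUMER / DAYS
print(f"{TBW_CONSUMER} TB in 3 years = {daily_tb:.2f} TB/day written")
# At the same write rate, the enterprise drives should last:
print(f"SM863 lifetime at that rate: {TBW_SM863 / daily_tb / 365:.0f} years")
```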