Recovering from a failed Proxmox upgrade
Today I’ll look at how I recovered from a failed Proxmox 8 to 9 upgrade on a small form factor Dell that I use. This system is one of two in the cluster, and it failed during the upgrade process. This failure was not caught by the pve8to9 utility, which gave me a clean report prior to the upgrade. However, during the last few steps of the upgrade, I got the message below, which made me think I […]
https://dustinrue.com/2025/12/recovering-from-a-failed-proxmox-upgrade/
. . . sigh . . . Sometimes the RAM fairies get tired, I guess. I get it; what's #shit is when humans lie about it, hide the fact, and sell the RAM on as a component in a 'tested', 'working' machine that fails to boot every 7th time.
#SystemRescueCD and #memtest86 are awesome by the way.
I've seen reports of people using #clonezilla on #LVM volumes, but I wasn't so lucky.
I ended up using a combination of #systemrescuecd, #partclone and #ssh to get maximum transfer speed between the two computers.
The same technique many people use with #tar.
partclone.ext4 --clone --source <partition> --output - | ssh root@<receiving_IP> 'partclone.ext4 --restore --source - --overwrite <dest_partition>'
Just remember to disable the `iptables` unit and set the root password with `passwd` on the receiving computer, or `ssh` won't get through.
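Putting it together, the receiving side ends up looking roughly like this; a sketch only, with the IP and device names as placeholders for your own:

```shell
# On the receiver, booted from SystemRescue:
systemctl stop iptables        # SystemRescue firewalls off incoming ssh by default
passwd                         # set a root password so the ssh login is accepted

# Then, from the sender:
partclone.ext4 --clone --source /dev/sda3 --output - \
  | ssh root@192.168.1.50 'partclone.ext4 --restore --source - --overwrite /dev/sda3'
```

The sender streams the image to stdout and the receiver reads it from stdin, so nothing touches a disk in between; the same pipe shape works with `tar` or `dd` if you swap the ends.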