I've been meaning to revisit running iSCSI multipathing in my Proxmox cluster. I previously had it set up with a TrueNAS machine providing storage, but I'm now using a Synology. During the initial cluster configuration I attempted the iSCSI setup but failed to get it working across all cluster nodes.
Instead, I went with NFS for several months, since it supports the most Proxmox content types (Disk image, Container template, Container, Snippets, VZDump backup file, ISO image), whereas the iSCSI + LVM option is more limited (Disk image, Container).
I finally revisited this and got iSCSI, multipath, and the LVM overlay working. I think I've missed one concept though, and it's an important one that I need to validate. Before I get there, I wanted to share the config steps; example commands for the shell steps are sketched after the list:
1. Synology: Set up Storage Volume
2. Synology: Set up LUN
3. Synology: Generate the iSCSI target IQN
4. Synology: Add the initiator IQNs of the cluster hosts to the LUN
5. Proxmox: Add the first iSCSI target (portal and IQN) at the cluster level
6. Proxmox: Add the second iSCSI target (the second path) at the cluster level
7. Proxmox shell: Install open-iscsi and multipath-tools if you haven't already
8. Proxmox shell: Verify the WWIDs of the newly attached /dev/sdb and /dev/sdc (example disk names), ensuring that the WWIDs match each other and belong to the correct iSCSI target.
9. Proxmox shell: Configure /etc/multipath.conf to match your storage device, including denying multipath management of all devices except the explicit WWID of your iSCSI devices.
10. Proxmox shell: Restart multipathd. Once the multipath alias device appears, you can initialize it as an LVM Physical Volume (PV) and confirm with pvdisplay.
11. Proxmox shell: You can now create an LVM Volume Group (VG), which will be visible across the whole cluster.
12. Proxmox: You can now add an LVM overlay at the cluster level by selecting your new Volume Group.
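To make the steps above concrete, here are some example commands. Everything below is a sketch: storage IDs, IPs, IQNs, WWIDs, and aliases are placeholders, not my actual values. For step 4, each node's initiator IQN comes from the open-iscsi config; run this on every Proxmox node and add each IQN to the LUN's allowed initiators on the Synology:

```
# Print this node's iSCSI initiator IQN (open-iscsi default location)
cat /etc/iscsi/initiatorname.iscsi
# e.g. InitiatorName=iqn.1993-08.org.debian:01:abcdef123456
```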
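Steps 5 and 6 can be done in the datacenter Storage view, or from the shell with pvesm: one entry per portal path, both pointing at the same target IQN (the IDs, IPs, and IQN here are hypothetical):

```
# One iSCSI storage entry per Synology portal/path; content none
# because LVM will sit on top rather than using the LUNs directly
pvesm add iscsi syn-iscsi-a --portal 10.0.10.11 --target iqn.2000-01.com.synology:nas.Target-1.xyz --content none
pvesm add iscsi syn-iscsi-b --portal 10.0.20.11 --target iqn.2000-01.com.synology:nas.Target-1.xyz --content none
```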
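Step 7 is a plain package install on each node:

```
apt update
apt install open-iscsi multipath-tools
```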
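For step 8, udev's scsi_id helper is one way to read a WWID; the same LUN seen over two paths should report an identical WWID (/dev/sdb and /dev/sdc are the example names from above):

```
/lib/udev/scsi_id -g -u -d /dev/sdb
/lib/udev/scsi_id -g -u -d /dev/sdc
# Both commands should print the same WWID for the same LUN,
# e.g. 36001405aabbccddeeff00112233445566 (placeholder)
```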
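For step 9, a minimal /etc/multipath.conf in the shape described above: blacklist everything, then make an exception for the LUN's WWID. The WWID and alias are placeholders:

```
defaults {
    user_friendly_names yes
}

# Deny multipath management of all devices...
blacklist {
    wwid ".*"
}

# ...except the explicit WWID of the iSCSI LUN
blacklist_exceptions {
    wwid "36001405aabbccddeeff00112233445566"
}

multipaths {
    multipath {
        wwid  "36001405aabbccddeeff00112233445566"
        alias mpath-synology
    }
}
```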
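For step 10, restart the daemon, check that the alias map comes up with both paths, then initialize the multipath device as a PV (using the placeholder alias from the config above):

```
systemctl restart multipathd
multipath -ll                        # the mpath-synology map should list both paths
pvcreate /dev/mapper/mpath-synology  # initialize the multipath device as an LVM PV
pvdisplay                            # confirm the new PV
```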
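Step 11 is a single vgcreate; since the VG metadata lives on the shared LUN, the other nodes see it once they have their own iSCSI sessions and multipath config in place (vg-synology is a placeholder name):

```
vgcreate vg-synology /dev/mapper/mpath-synology
vgs   # the new VG should now be listed
```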
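And step 12 can also be scripted with pvesm rather than clicked through the GUI; the shared flag is what tells Proxmox the VG is reachable from every node (IDs are again placeholders):

```
pvesm add lvm synology-lvm --vgname vg-synology --content images,rootdir --shared 1
```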
Now I'm able to use my iSCSI-backed LVM volume across all clustered nodes for HA of VMs and Containers.
#homelab #selfhosted #proxmox #synology #iscsi #cluster #storage