Currently patching my Proxmox cluster to prep the experimental SDN functionality so I can run VXLAN across my nodes.

I want to test this so I can have virtual routers with devices on the same internal networks but spread across multiple physical nodes.

I'm familiar with doing this on VMware with dVSes and VLANs, but I'm trying to replicate it on Proxmox. If this doesn't work as expected, I may try some other options. I hope to solve this in software so I don't have to buy gear.

#homelab #proxmox #networking #sdn #vxlan #ovs #selfhosted #selfhosting

Ok quick update: I got Proxmox SDN working with VXLAN and Vnets across the cluster!

To reproduce:

1. Install SDN per instructions (about three easy steps per node). See docs: https://pve.proxmox.com/wiki/Software_Defined_Network
2. Add a Zone at the SDN datacenter level. Specify Zone name and Prox nodes to apply to.
3. Add a Vnet at the SDN datacenter level. Specify zone, Vnet name, and VXLAN ID.
4. Apply the SDN configuration; this pushes the Vnet config to each Proxmox node.
5. Add/replace an interface on the target VM. For testing, I added an interface targeting the new Vnet on two VMs on separate Proxmox nodes, assigned static IPv4 addresses, and pinged between them.
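The GUI steps above can also be sketched from the CLI with pvesh. This is just a rough sketch: the zone/vnet names, peer IPs, and VXLAN ID are made-up examples, and the exact flags may differ by PVE version, so check the SDN docs linked above.

```shell
# Run on any cluster node. All names and addresses are examples.

# Step 2: create a VXLAN zone, listing the peer addresses of the cluster nodes
pvesh create /cluster/sdn/zones --zone labzone --type vxlan \
    --peers 192.168.1.10,192.168.1.11,192.168.1.12

# Step 3: create a Vnet in that zone with a VXLAN ID (VNI)
pvesh create /cluster/sdn/vnets --vnet labnet --zone labzone --tag 100000

# Step 4: apply the SDN configuration, pushing it to every node
pvesh set /cluster/sdn
```

After the apply step, the new `labnet` bridge should show up as an attachable network device on each node in the zone.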

@zrail @r3pek @junq

#homelab #proxmox #networking #sdn #vxlan #ovs #selfhosted #selfhosting


The SDN / VXLAN Proxmox saga continues...

After posting this I noticed some strange behavior. Pings were fine and nmap showed the HTTPS service on my new firewall, but when I navigated to the firewall's management site it wouldn't load. I got ssl_error_rx_record_too_long in Firefox and timeouts in Chrome.

I opened up Wireshark and noticed the return traffic for SSL was severely delayed and appeared malformed.
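For anyone else debugging this, you can also watch the traffic from the command line instead of Wireshark. Interface names here are examples; VXLAN uses UDP port 4789 by default.

```shell
# On the Proxmox node: capture the VXLAN-encapsulated traffic on the uplink
tcpdump -ni eno1 udp port 4789

# Inside a guest: watch the HTTPS exchange for stalls or malformed replies
tcpdump -ni eth0 'tcp port 443'
```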

What I missed in my instructions is that VXLAN adds 50 bytes of encapsulation overhead, so on the endpoints within the internal network I had to set a custom MTU of 1450 so the encapsulated frames still fit within the 1500-byte MTU of the interfaces on the Proxmox nodes.
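The arithmetic, plus a quick way to verify the path MTU from a guest (the IP and interface name below are examples):

```shell
# VXLAN encapsulation overhead: outer Ethernet (14) + outer IP (20)
# + outer UDP (8) + VXLAN header (8) = 50 bytes
echo $((14 + 20 + 8 + 8))   # 50
echo $((1500 - 50))          # 1450: the MTU for guests on the vnet

# Verify from a guest: 1422-byte ICMP payload + 28 bytes of IP/ICMP
# headers = 1450; -M do sets the don't-fragment bit, so an oversized
# packet fails instead of silently fragmenting.
#   ping -M do -s 1422 10.10.10.2
# Set the guest interface MTU (interface name is an example):
#   ip link set dev eth0 mtu 1450
```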

After configuring this on one of the internal machines, the site worked, so I thought I might also need the 1450 MTU on the firewall's internal interface. When I set that, my home network immediately started dropping and reconnecting repeatedly, so I reverted the change. I really don't know why changing the MTU on the firewall's internal interface would affect my main network, but it did. It seems every device on that internal LAN needs the MTU change except the firewall for all the traffic to work properly.

Now it looks like the next thing to do is to start putting various machines behind the new routers to start segmenting my lab network, and get it off of the flat network for increased security and traffic isolation and control.

The Proxmox guide I linked earlier gives more details on the 50-byte overhead for VXLAN.

@zrail @r3pek @junq @train

#homelab #proxmox #networking #sdn #vxlan #ovs #selfhosted #selfhosting #MTU

@projectdp @zrail @r3pek @junq
Aye!! Just saw this!! Good on you!
@projectdp wait, it isn't possible now? all bridges are local (to the node)?

@r3pek

I thought the bridges were local to the node; I don't think they're automatically spanned across the cluster, are they?

@projectdp still a one-node-cluster... but thinking about expanding it 😜
So I'm interested in hearing your feedback on it 😉

@r3pek

From what I tried previously the bridges are not spanned across the cluster so you do need to implement some other mechanism to have a dvs (distributed virtual switch) equivalent.

What I'm seeing right now after adding the SDN piece is the ability to add Zones and VNets. The VNets will distribute the virtual networks (bridges) to each of the nodes.

Going to try setting this up now!

@projectdp is that on some beta channel? good to hear that they're working on it.

@r3pek

Nope, it's available in the non-enterprise repositories, but the documentation states it's experimental, so there may be some issues with it. For lab purposes, though, it's working so far!

@projectdp @r3pek bridges are local but then you could always add a tagged physical interface to every bridge and then have vlans that span through the cluster. But you'd have to configure your physical switch to know about every such vlan.

With VXLAN you can have a flat no-vlan network between the nodes, and then only use software to configure a complex network.

Another benefit - VXLAN connects across the nodes even in different data centers as long as you have any (routed, firewalled, wan) connectivity between your hypervisors.
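For reference, the VLAN-trunk approach from the first paragraph would look roughly like this in /etc/network/interfaces on each node (interface name and VLAN range are examples; this is a sketch, not a drop-in config). The VXLAN zone, by contrast, only needs plain IP reachability between the nodes.

```shell
# VLAN-aware bridge trunked over a physical NIC; the physical switch
# must be configured to carry every VLAN you use
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```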

@junq @r3pek

Yeah, the problem with the approach in your first paragraph for me is that I don't currently have a VLAN-capable switch behind my Proxmox nodes. I could place one there, but I'd have to physically re-network a bunch of stuff at home.

I do like those other aspects of VXLAN that you mentioned as well. Are you currently running VXLAN or VLANs with your setup?

@projectdp @r3pek I'm running just 2 vlans in my small home proxmox cluster (int, ext) with openwrt based home router acting as a smart switch. But I've seen vxlans widely used with Docker and k8s for their flexibility and also to not depend on the underlying network setup. Glad this came to proxmox now.

@junq @r3pek

Ah, I see. I do want to redo much of my network with VLANs, but I'll need to re-cable much of my switching gear and probably buy another switch.

I would really like to talk to some people who have done production k8s deployments and have a good networking background, so I can understand the range of possibilities for well-architected networks in these environments. When I look at Docker networking, whether a single host with containers or a cluster of hosts with containers, the network abstraction seems like a confusing mess.

@projectdp @junq

Before I set up this homelab, I started by refactoring the home network. I bought 2 AX3600 routers and put OpenWRT on them for switching (and main routing) on steroids. Just for the sake of it, I added a couple of Unifi ToughSwitches (they've changed the name now) with OpenWRT on them too.