Proxmox VE SDN VXLAN Setup

Aug 7, 2024 · 17 mins read

In the video below, we show you how to create a VXLAN zone in Proxmox VE


Proxmox VE is a very popular open source hypervisor

And since version 8.1, the SDN core packages are installed by default

Now there are various benefits to be had from Software Defined Networking, but one is the ability to create an overlay on top of the physical network

This lets you create virtual networks, allowing virtual machines on different nodes in the cluster to talk to each other without needing changes to the underlying physical network

Now that may not sound like much, unless you’ve worked in IT

But just adding a new VLAN for instance can require a lot of planning and preparation

Multiple changes will need to be made on several devices, and if a mistake is made it can bring down the entire network

But even with automation, in large companies it might still be a while before these changes can be implemented due to change control

So in this video we go over how you can take advantage of SDN and set up a VXLAN zone in Proxmox VE

Useful links:
https://pve.proxmox.com/pve-docs/chapter-pvesdn.html
https://en.wikipedia.org/wiki/Virtual_Extensible_LAN

Assumptions:
Now because this video is specifically about SDN, I’m going to assume you already have Proxmox VE installed

If not then I do have another video available which shows you how to do that

I’m also going to assume that you have a cluster built, because there’s no point setting up VXLAN Zones on a single server

If you don’t know how to set up a cluster, then there’s another video I made to help you with that

And lastly, I’m going to assume your servers are running at least version 8.1

But if you need to upgrade from version 7, then yes, I also have a video to show you how to do that

Prerequisites:
Now as mentioned in the intro, the SDN core packages get installed by default from version 8.1 onwards and these are fully supported

If you’ve upgraded from an older version though, you may need to install some additional software and make a configuration change

In any case, it’s always best to check that all of the servers are ready

First, we’ll make sure that the libpve-network-perl package is installed

To do this we want CLI access, so you can either SSH into the server or open a shell session from the web browser

Then we’ll make sure the repository cache is up to date and install the extra software if needed

apt update
apt install libpve-network-perl
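
To confirm the package is installed afterwards, you can query dpkg

# check the package status (should report "install ok installed")
dpkg -s libpve-network-perl | grep Status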

Next, we need to make sure that the server will include files added to the /etc/network/interfaces.d folder

To do that we’ll edit the interfaces configuration file

nano /etc/network/interfaces

Make sure the following line is at the end of the file

source /etc/network/interfaces.d/*

If not, then append that line, then save and exit

The line is necessary because the SDN changes do not modify the main configuration file

Instead, one or more files will be added to that folder we’ve asked to be included
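
If you’d rather script this check, the following one-liner appends the source line only when it’s missing. It’s a small sketch, so review it before running it on your own servers

# append the source line only if it isn’t already present in the file
grep -qxF 'source /etc/network/interfaces.d/*' /etc/network/interfaces || echo 'source /etc/network/interfaces.d/*' >> /etc/network/interfaces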

Now you’ll need to repeat this process for all of the PVE servers in the cluster

Creating A VXLAN Zone:
In Proxmox’s SDN solution, you first need to create a zone

Now not only can you create multiple zones, but you can also have different types of zones:
Simple
VLAN
QinQ
VXLAN
EVPN

Each of these offers a different way to create virtual networks, and in this example we’ll set up a VXLAN zone, which involves using UDP tunnels to exchange traffic between PVE servers

So the server will take the VM traffic, bundle it into a tunnel to the relevant PVE server, which will then remove the traffic from the tunnel and pass it to the target VM

Now one benefit of this tunnelling method is you can have different PVE nodes in different subnets

And you could even have servers in different data centers, but the VMs will still have Layer 2 connectivity to each other

Do bear in mind that even though the traffic is being placed into a tunnel, it isn’t being encrypted, and that’s not good if the traffic is going over a 3rd party network or the Internet

Fortunately, Proxmox do offer an example in their documentation to show how you can encrypt this traffic using IPsec

Now from a design perspective, having the same Layer 2 network at more than one site isn’t actually ideal, but it is very useful if you ever need to migrate VMs to another site or run into a business recovery situation for instance

But what’s really useful here, is that you can create more and more of these virtual networks and it won’t require any further changes to the underlying physical network

Not only does that offer more agility for managing and deploying virtual machines, but it could also save on network capital and operating costs

Bear in mind, if the traffic has to pass through a firewall for instance, these PVE servers will need to be able to connect to each other on UDP port 4789
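
As an illustration, if you were managing a node’s firewall with plain iptables, a rule along these lines would allow the tunnel traffic. This is only a sketch, and the 192.168.102.0/24 subnet is this example’s cluster network, so adjust it to match yours

# allow VXLAN tunnel traffic from the other PVE nodes (example subnet)
iptables -A INPUT -p udp --dport 4789 -s 192.168.102.0/24 -j ACCEPT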

To add a new zone, we first need to navigate to Datacenter | SDN | Zones

Next click the Add drop down menu and for this example select VXLAN

For the ID, you’ll want to enter a useful name to describe this network, in this example I’ll call it Test

As part of the configuration, we need to provide an IP address list of the nodes involved, for example
192.168.102.10,192.168.102.11,192.168.102.12

In other words, the IP addresses of the server interfaces to connect to, in a comma separated format

This matters because servers typically have several interfaces, and you’d want VM traffic to be sent over a 10Gb NIC for instance, not a 1Gb management interface

Because this VXLAN method involves tunnelling traffic, you have to lower the MTU (Maximum Transmission Unit) for the VM traffic so that, once encapsulated, it still fits through the physical interface

The reason being that today’s computers still talk to each other using a protocol called Ethernet, which goes back to when network interfaces were only capable of 10Mb/s

If a computer wants to send a file of say 500MB to another computer, it can’t send that in one exchange

Instead, the computer has to break the file down into chunks, or more specifically Frames, and the size limit for an Ethernet Frame is typically 1518 bytes

However, 18 bytes is used by Ethernet itself, leaving the computer with a limit of 1500 bytes per frame, the MTU

I don’t want to get into more details than that, other than to say that the computer can’t even use all of that for data, because higher layer protocols such as IP and TCP consume some of it with their own headers

Do bear in mind though that if you’re using 10Gb networking and above, then Jumbo Frames make more sense as the Frame size may be 9018 bytes with an MTU of 9000 bytes, but do check as it varies between hardware vendors

Now by default the GUI is showing the MTU as being set to auto, suggesting this will be automatically taken care of, but as others have reported issues in forums we’ll set this to 1450 as shown in the documentation example

In other words, we have to give up 50 bytes to allow for the tunnel overhead
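
For reference, that 50 bytes is the standard VXLAN-over-IPv4 encapsulation overhead, which breaks down as follows
Outer IPv4 header: 20 bytes
Outer UDP header: 8 bytes
VXLAN header: 8 bytes
Inner Ethernet header: 14 bytes

So with a physical MTU of 1500 bytes, 1500 - 50 leaves 1450 bytes for the VM’s own IP packets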

Within the cluster, you can restrict the Zone to certain nodes if you’d prefer, but by default the Zone will be created on all of them

IPAM is beyond the scope of this video, and although by default we see IPAM set to use the built-in PVE IPAM solution, at the time of recording, we can’t make use of this for VXLAN zones

There is additional software that can be installed to provide the necessary DHCP server, but this only works for Simple zones

In addition, that part of SDN is still in tech preview and is not supported

As we’re setting up VXLAN zones, we’ll stick to using either static IP addressing or rely on a traditional DHCP server for now

If you click Advanced, you can set the DNS server, reverse DNS server and the DNS zone name. The servers in particular need to have been predefined, but this would allow you to define separate DNS servers for each zone

Again, this has no relevance in our case because we won’t be using IPAM or a DHCP service from the PVE servers

Once you’ve made your choices, click Add to create the Zone
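
If you prefer the CLI, the same zone can be created through the API with pvesh. This is a sketch whose parameters mirror the GUI fields above, so do check pvesh usage on your version for the exact names

# create the VXLAN zone from the CLI (sketch; values match the GUI example)
pvesh create /cluster/sdn/zones --zone Test --type vxlan --peers 192.168.102.10,192.168.102.11,192.168.102.12 --mtu 1450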

TIP: Pay attention to the state which is shown in the far right column. At this stage it shows as new because we haven’t actually applied our changes yet

Creating a VNet:
In the classic network setup for Proxmox VE, each server will have a Linux or maybe OVS bridge that defines network connectivity

So when you create a virtual machine, you’ll give it a network interface that is attached to a Linux bridge as well as a VLAN tag if you’re isolating your computers using VLANs

In this SDN solution, however, we have VNets which are virtual networks and a Zone can have multiple VNets defined within it

To add a new VNet, we first need to navigate to Datacenter | SDN | VNets

Next, click the Add drop down menu and select Create

Bear in mind, you’re limited to 8 characters for the name; we’ll call this one network1

But you can give it a more descriptive name if you like in the Alias field, which allows far more characters

From the drop down menu you’ll need to select the Zone this belongs to

Now I haven’t found any clarity on what the VXLAN ID range is in Proxmox’s documentation, but Cisco documentation mentions anything from 1 to 16777214, so I’ll go with that as VXLAN is based on standards

For the Tag, or VXLAN ID, I’ll assign a value of 10000

NOTE: This has to be unique for each VNet

VLANs won’t be used in this example, so we’ll leave the VLAN Aware option unchecked, but the option is there

Once you’ve made your choices, click Create to create the VNet

TIP: As before, pay attention to the state. At this stage it shows as new because we haven’t actually applied our changes yet
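
Again, for the CLI-inclined, the equivalent pvesh call would look something like this (a sketch, so confirm the parameter names on your version)

# create the VNet in the Test zone with VXLAN ID 10000 (sketch)
pvesh create /cluster/sdn/vnets --vnet network1 --zone Test --tag 10000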

Although DHCP isn’t available for VXLAN zones at the moment, it is worth taking a peek at what will likely be possible in the future

Basically, each VNet can have multiple subnets assigned to it

To create a subnet, you would select the VNet, then in the Subnets section click Create

The first thing to do would be to provide the network address for this subnet in CIDR notation, for example
192.168.50.0/24

Next we’d need to define the Gateway, which is basically the device that provides an exit point for the subnet, for instance
192.168.50.254

The use of SNAT (Source Network Address Translation) would make sense if PVE would be handling Internet access for instance and the source IP address of traffic from VMs would need to be changed. However, for me, I’d rather leave this to a firewall
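
Conceptually, SNAT here amounts to a masquerade rule along these lines, rewriting the source address of traffic leaving the subnet. This is just an illustration of the idea, not the exact rule PVE generates

# rewrite the source address of 192.168.50.0/24 traffic leaving via vmbr0 (illustration only)
iptables -t nat -A POSTROUTING -s 192.168.50.0/24 -o vmbr0 -j MASQUERADE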

Similar to the Zone configuration, we can define different DNS zones for each subnet

The next step would be to define the DHCP range for this subnet

To define the IP addressing for the subnet, you would need to click on the DHCP Ranges tab

Here you would click Add and for the Start enter 192.168.50.100 and for the end 192.168.50.199 for instance

NOTE: At first glance it would have been good if we only had to enter the last octet, so 100 and 199. But if our subnet had been 192.168.50.0/23, things would get complicated for the developers, I guess

Now, this solution does support multiple DHCP ranges, but I prefer to have a single range unless maybe you’re carrying out an IP migration for instance

Once you’ve made your choices, you would click Create

Although this won’t work for us, we can still leave it as a possible future reference

TIP: Again, pay attention to the state because changes need to be applied

Applying Changes:
As I’ve been hinting at along the way, nothing we’ve done so far has actually taken effect yet

That’s because for this SDN solution, you have to commit your changes

To do that, navigate to Datacenter | SDN

Now click the Apply button
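
If you’d rather do this from the CLI, reloading the SDN configuration with pvesh should have the same effect, although do verify this against the documentation for your version

# apply pending SDN changes across the cluster (sketch)
pvesh set /cluster/sdn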

You should see several tasks then being created and run on each node. Once they’re completed, there should be a new network Zone available on each one

Assuming you don’t get any errors, the new virtual network should be available for use

Attach VM to VNet:
When you look at a node in the GUI, you’ll typically see a Zone called localnetwork below it

This represents the classic or physical network that is installed by default and it will likely have a Linux Bridge like vmbr0

Because we’ve applied our SDN changes, we’ve now got a new Zone called Test and within that is a network that I called network1

To attach a VM to this network we first need to click on the VM, then navigate to Hardware

Next we select its Network Device, click Edit and in this case I’ll change the Bridge to network1

Any VLAN Tag will be removed for this example as we aren’t using VLANs

While we’re here, we also need to change the MTU

To do that, first click Advanced

The MTU for the vNIC needs to match the MTU we set for the Zone, in our case 1450

While we could enter that value in the field, if the vNIC is using the VirtIO driver, you can set this to 1 instead and the vNIC will inherit the MTU from the network it’s attached to
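
If you prefer the CLI, qm can make the same change from the node. This is a sketch which assumes VM ID 100; note that omitting the MAC address like this generates a new one, so include the existing MAC if the VM should keep it

# reattach VM 100’s first NIC to the VNet and inherit the bridge MTU (mtu=1)
qm set 100 --net0 virtio,bridge=network1,mtu=1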

As before, we don’t have an IPAM solution for this example

So as long as we have a DHCP server or Relay Agent on this new network, our VM should be assigned an IP address

In Linux, a quick way to check if an IP address has been assigned by DHCP is to run this command
ip a

While we’re here we should also double check that the MTU for the interface is 1450 instead of the usual 1500
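
For example, assuming the interface inside the VM is named eth0 (adjust to match yours), the MTU appears on the first line of output

# show interface details, including the MTU (eth0 is just this example’s name)
ip link show dev eth0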

Testing:
For this test we’ll have two VMs, each running on a different node, but connected to the same VNet

The easiest way to test connectivity is to run a ping command, for example

ping 192.168.1.102

And we can also check the MTU limitation by running a command like this

ping 192.168.1.102 -M do -s 1422

With these extra parameters we’re blocking fragmentation and setting a size so we don’t go above the MTU of 1450 bytes

TIP: We can’t send 1450 bytes of data as we lose 20 bytes to the IP header and 8 bytes to the ICMP header when using the ping command :(

So when we run this command it should fail

ping 192.168.1.102 -M do -s 1423

In other words, the computer is blocked from breaking the traffic into smaller Frames and as we’re exceeding the 1450 byte MTU limit, the traffic can’t be sent

Another thing we need to test is if traffic is actually being tunnelled

What we don’t want is for the traffic from a VM to be sent out a physical interface as is, because then there would be no separation of traffic on the physical network

That would be no different to putting computers into different subnets, but within the same VLAN

First we’ll run a tcpdump session on one of the nodes to look for any traffic in the new network range

tcpdump net 192.168.1.0/24

Then we’ll run our ping test from our VM on the other node

Now we shouldn’t see any traffic because it should be going into a VXLAN tunnel

Assuming that works fine, we’ll stop the session and this time monitor for traffic using UDP port 4789

tcpdump udp port 4789

We should see the ping traffic going back and forth, but within a VXLAN tunnel

One thing I’ve noticed, mind, is that the servers listen on all interfaces for port 4789 traffic

ss -tunlp | grep 4789

So even though we’ve picked out specific IP addressing for the Zone, the server doesn’t restrict itself to a matching IP address

All the more reason to configure the built-in firewall solution to lock down access

The big question now is, do we get isolation between VNets?

Even if computers are in the same IP range, they shouldn’t be able to talk directly to computers in another VNet

Well to test that out I created another VNet, within the same Zone, and attached another VM to it

The first thing I noticed was that the computer had no IP address, which is a good sign because the DHCP server is in a different VNet and so that immediately suggests these VNets are indeed separated

And after assigning this computer a static IP address, sure enough it couldn’t reach the other two VMs, so we do have separation between VNets

Physical Network Connectivity:
Having virtual networks is all well and good when the traffic is between virtual machines, but at some point these VMs will need to reach devices on the physical network and probably even the Internet

The easiest option is to use a virtual firewall with an interface attached to the vmbr0 Linux bridge for instance and others in various VNets

This is just the same as the classic network solution: the firewall will act as a gateway between subnets, whether virtual or physical

But what this means is that if there are only VMs in subnets, you can setup VNets instead of VLANs and you don’t need to do anything on the physical network switches

Now, if you have virtual and physical devices in the same subnet, then you could stick with using a Linux Bridge for that, or maybe you could consider setting up VLAN Zones which take advantage of a Linux Bridge

Summary:
Hopefully you’ll have seen that VXLAN Zones are relatively simple to set up

We should expect more capabilities in the future, but at this stage we don’t have access to the in-built DHCP server or IPAM solution for VXLAN Zones

Having said that, we also have the potential of using 3rd party options like NetBox and PowerDNS

But even just a basic setup like this allows you to simplify the underlying network design, allowing you to be much more agile when it comes to managing virtual devices in Proxmox VE
