Proxmox with OPNSense setup on dedicated server

8th January 2022

I needed a dedicated server for running a few VMs, so I got a server in Montreal from Kimsufi. This guide should also work with other dedicated server providers, including Kimsufi’s sister companies OVH and SoYouStart. This is a write-up on how I set it up. These are the things I needed from the server:

  • Virtual router to separate public/private networks.
  • ZFS file system for reliability, snapshots.
  • Additional IPv4 and IPv6 addresses.
  • VPN to access private subnets.
  • Setup to torrent with rutorrent, NFS, Plex.

I decided to use Proxmox as my virtualisation distribution and OPNSense for my virtual router. This is not an in-depth guide, but it should provide enough pointers on how to go about things.

Initial setup

I went with a server with an Intel W3520 CPU, 32GB of DDR3 ECC RAM and 2x2TB SATA disks. Kimsufi didn’t have a template for installing Proxmox with ZFS. There are a few ways to work around this, including installing from recovery mode. After exhausting most options, I settled for installing Debian on the root partition with an EXT4 soft RAID.

Kimsufi Debian install template configuration

As shown in the image, make sure you use minimal space for the Proxmox OS. You will then have to create partitions on the remaining space and build a ZFS array; a sketch follows.
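Something along these lines, assuming the spare space on each disk ended up as a fourth partition; the device names, pool name and datasets are illustrative:

    # Mirrored pool across the leftover partitions on both disks
    zpool create -o ashift=12 tank mirror /dev/sda4 /dev/sdb4

    # Subvolumes (datasets) to hand to Proxmox later
    zfs create tank/vmdata
    zfs create tank/media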

The ZFS array will not automatically be visible and available for use in the Proxmox GUI. Make sure you add the subvolumes as storage for use with Proxmox.

ZFS directory configuration in Proxmox GUI

Networking

For the networking setup, we will need the following:

  • 1x private IPv4 subnet
  • 1x publicly routable IPv4 subnet (externally supplied)
  • 1x private IPv6 subnet (externally supplied)
  • 1x public IPv6 subnet (externally supplied)
  • Zerotier VPN to access private subnets
  • OPNSense virtual router on VM for routing
  • VLAN-aware OpenVSwitch

I have made a diagram to better illustrate my setup. Because Kimsufi only gives me a limited number of IPs to work with, I decided to use the services of FreeRangeCloud, who gave me a /29 IPv4 block and a /48 IPv6 block to work with. These are routed to me over a Wireguard tunnel.

OpenVSwitch configuration diagram

Host setup

You can do most of the host network setup using the Proxmox webUI. My /etc/network/interfaces ended up looking like this.
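A minimal sketch of an OpenVSwitch layout like mine; the NIC name, addresses and OVS port names are placeholders rather than my actual values:

    auto lo
    iface lo inet loopback

    auto eno1
    iface eno1 inet manual

    # OVS bridge carrying the uplink and the host's internal ports
    auto vmbr0
    iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports eno1 mgmt vlan100

    # Host's public IP on an internal port of the bridge
    auto mgmt
    iface mgmt inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        address 192.0.2.10/24
        gateway 192.0.2.254

    # Host's private IP on VLAN 100
    auto vlan100
    iface vlan100 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=100
        address 10.0.100.2/24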

I also had to run a few extra commands on every reboot; a cron @reboot entry works for this. These commands NAT the private network used for VLAN 100, destination-NAT (port forward) the Wireguard and Zerotier VPN ports, add a route so the Proxmox host can be reached over Zerotier, and set up proxy NDP for the IPv6 addresses used on VLAN 100. The added route makes sure the private IP of the Proxmox host is reachable over the Zerotier VPN configured later. A sketch of the script is below.
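Roughly along these lines; the addresses, ports and the OPNSense private IP (10.0.100.1 here) are placeholder assumptions, with interface names matching the sketch above:

    #!/bin/sh
    # 1. NAT the VLAN 100 private network out via the host's public interface
    iptables -t nat -A POSTROUTING -s 10.0.100.0/24 -o mgmt -j MASQUERADE

    # 2. Port-forward (DNAT) Wireguard and Zerotier traffic to OPNSense
    iptables -t nat -A PREROUTING -i mgmt -p udp --dport 51820 -j DNAT --to-destination 10.0.100.1
    iptables -t nat -A PREROUTING -i mgmt -p udp --dport 9993 -j DNAT --to-destination 10.0.100.1

    # 3. Route the Zerotier subnet via OPNSense so the host's private IP is
    #    reachable over the VPN
    ip route add 172.27.0.0/16 via 10.0.100.1

    # 4. Proxy NDP so the IPv6 addresses used on VLAN 100 are reachable
    #    from upstream
    sysctl -w net.ipv6.conf.all.proxy_ndp=1
    ip -6 neigh add proxy 2001:db8:0:100::1 dev mgmt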

OPNSense setup

You will need to install OPNSense, or any other routing software of your choice, on a VM; Mikrotik RouterOS, VyOS and pfSense are also solid choices. The easiest way to access the webUI is to set up another VM and use a browser from there.

Setting up the interfaces should be straightforward. Mine looked like this. You may need to set up firewall rules to secure everything.

OPNSense interface configuration

Wireguard tunnel

Next, I had to configure a Wireguard tunnel with the details given to me by FreeRangeCloud to get the additional IP addresses. In addition, I had to create my Wireguard public/private key pair and send them the public key.
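If you generate the key pair by hand, it is a one-liner with the wg tool (the file names are arbitrary):

    # Create the key pair; only the public key is sent to FreeRangeCloud
    wg genkey | tee wg-private.key | wg pubkey > wg-public.key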

My setup on OPNSense looked like this. I also had to add the tunnel to the interfaces list.

FreeRangeCloud Wireguard VPN endpoint
FreeRangeCloud Wireguard VPN local configuration
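For reference, the same tunnel in plain wg-quick notation would look roughly like this; every key, address and endpoint below is a placeholder:

    [Interface]
    PrivateKey = <my-private-key>
    Address = 192.0.2.66/31
    MTU = 1420

    [Peer]
    PublicKey = <freerangecloud-public-key>
    Endpoint = pop.example.net:51820
    # The routed /29 and /48 arrive over the tunnel; upstream traffic
    # goes back out through it as well
    AllowedIPs = 0.0.0.0/0, ::/0
    PersistentKeepalive = 25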

Zerotier

Zerotier has a plugin for OPNSense and configuration is easy. My configuration looked like this. I made sure to route the private subnets to the OPNSense host, which allows me to easily access all the private VMs.

Zerotier VPN configuration

MTU issue

One of the issues I had was that services would fail to work properly over the public IPs. I managed to diagnose this as an MTU issue: to accommodate IPv6 packets going over the Wireguard tunnel, the MTU needs to be set to 1420. This can be fixed on the OPNSense side by setting TCP MSS clamping on the Wireguard interface.

MTU+TCP MSS settings on OPNSense Wireguard interface

The other place this can be set is on the VM interface or in the LXC settings (in /etc/pve/lxc/100.conf, replacing 100 with the container ID).
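For an LXC container, a minimal example, with the container ID and network values as placeholders:

    # /etc/pve/lxc/100.conf — append mtu= to the container's network line
    net0: name=eth0,bridge=vmbr0,tag=100,ip=dhcp,mtu=1420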

One last thing to take care of is making sure all services, like the Proxmox and OPNSense webUIs, are listening on internal IP addresses only.
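For the Proxmox webUI, for instance, pveproxy can be pinned to the host's private address (the address here is a placeholder):

    # /etc/default/pveproxy — restrict the webUI to the private IP
    LISTEN_IP="10.0.100.2"

    # Apply with: systemctl restart pveproxy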

Torrenting setup

For torrenting, I set up three containers: rutorrent for torrenting and seeding, Plex for media streaming, and NFS for file sharing. The two challenging parts here were:

  • Mounting a ZFS directory into multiple LXC containers
  • Setting up an NFS file server on an LXC container

Mounting ZFS directory on LXC

The easiest way to do this is to use privileged containers with bind mount points. You can try mapping UIDs with unprivileged containers, but that did not work very well for me. First, create a privileged LXC container.

LXC container configuration
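The same thing can be done from the CLI; a sketch, where the container ID, template and storage names are assumptions:

    # --unprivileged defaults to 0, so this creates a privileged container
    pct create 108 local:vztmpl/debian-11-standard_11.0-1_amd64.tar.gz \
        --hostname rutorrent --storage local \
        --net0 name=eth0,bridge=vmbr0,tag=100,ip=dhcp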

After that, you can add the bind mount in the container settings in /etc/pve/lxc/108.conf.
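The bind mount is a single mpX line; the host path and mount point here are illustrative:

    # /etc/pve/lxc/108.conf — bind-mount a ZFS dataset into the container
    mp0: /tank/media,mp=/mnt/media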

That should make it work. Make sure the folder is readable/writable by the user you are planning to use.

NFS file server on LXC container

An NFS server does not run in LXC containers by default. I had to create a privileged container and perform the following steps on the host.
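A sketch of the host-side steps, assuming container ID 109; the usual approach is loading the NFS module on the host and relaxing the container's AppArmor confinement:

    # Load the NFS server module on the host (add nfsd to /etc/modules to persist)
    modprobe nfsd

    # /etc/pve/lxc/109.conf — let the container run nfsd
    lxc.apparmor.profile: unconfined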

The setup on the guest was as follows.
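Inside the container it is then a standard NFS server setup; the export path and subnet are placeholders:

    # Install the NFS server
    apt install nfs-kernel-server

    # /etc/exports — share the bind-mounted directory with the private subnet
    /mnt/media 10.0.100.0/24(rw,sync,no_subtree_check)

    # Apply the export list
    exportfs -ra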

