Richard Bucker

OpenBSD vmm Gateway

Posted at — Apr 22, 2021

UGH! There are those days when I feel like Charlie Brown trying to kick a field goal while Lucy is holding… and I know it’s going to go wrong. In summary: things are going wonky for VMware and I just do not know if they are going to make it. Then there is licensing and support, which can be expensive and useless for the price (I have some stories about a large commercial DB vendor). And then there is the vulnerability and misuse of Docker Hub, and the needless complexity of Kubernetes.

The lesson to take away from VMware is that VMs have their place. And the lesson to be taken from Docker is that containers have their place too. What they have in common is orchestration. And orchestration is king. Let’s be clear: I do not think anything bad is going to happen to VMware, but the future is uncertain, and given the pricing I’d prefer something else.

I am not discussing AWS, Google, DigitalOcean etc here because they are just another tool with APIs.

The project

Someone coined the phrase “cattle, not pets,” and while that makes some sense, I believe in chickens. Just consider the differences in input and output requirements between chickens and cows.

So in the references there is a document where DigitalOcean describes a VPC gateway. Essentially there is one machine with a public IP that acts as a reverse proxy to an app server running in the backend and only accessible via a private network. My goal for this post is to reproduce that project using OpenBSD 6.9, as it has all the prerequisite features. The other three references are just that: references. What follows is not exactly a detailed explanation but the steps I performed to complete the task.

The setup

Install OpenBSD onto a bare-metal machine. I installed a few packages just to make things easier. My hardware is an Intel NUC i7 with 32G RAM and one ethernet port, so I added a USB/ethernet adapter. Unfortunately it is not recognized after a reboot, but there is a workaround for the time being.

The two ethernet ports are required because OpenBSD’s vmm seems to complain when the host port and the VMs both use DHCP: the host will intercept the DHCP response and then everything breaks.
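On my host that means the built-in NIC keeps the DHCP uplink while the USB adapter is dedicated to the VM bridge. A sketch of the hostname.if(5) files (em0 and ure0 are assumptions; check ifconfig for your actual driver names):

```
# /etc/hostname.em0 -- host uplink keeps DHCP
dhcp

# /etc/hostname.ure0 -- second port, no address, just up
up

# /etc/hostname.bridge0 -- bridge that backs the vmm switch
add ure0
up
```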

Download installXX.iso, where XX is the version, from any of the legitimate mirrors.
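For 6.9 the image lives at the usual mirror path. A small sketch that builds the URL from the release number (the mirror host is the standard CDN; fetch with ftp(1) and verify with signify(1) on OpenBSD):

```shell
# Build the install image URL for a given release (6.9 -> install69.iso).
REL=6.9
VER=$(echo "$REL" | tr -d .)
URL="https://cdn.openbsd.org/pub/OpenBSD/${REL}/amd64/install${VER}.iso"
echo "$URL"
# Then, on OpenBSD:
#   ftp "$URL"
#   ftp "https://cdn.openbsd.org/pub/OpenBSD/${REL}/amd64/SHA256.sig"
#   signify -C -p "/etc/signify/openbsd-${VER}-base.pub" -x SHA256.sig "install${VER}.iso"
```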

When creating the VM disk image you’ll need to store it some place. This is subjective and depends on which user owns the image and how it’s configured.

Phase 1: install the OS on the VMs

on the host
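The commands here are essentially the ones preserved as comments in the vm.conf below, spelled out for the gateway (the 6.9 ISO name is assumed per this post):

```
# create a 50G sparse disk for the gateway VM
vmctl create -s 50G disk2.qcow2
# boot the installer: 1G RAM, a local (-L) interface, attached to
# gw_switch, with the install ISO as the boot device
vmctl start -m 1G -L -i 1 -n gw_switch -r install69.iso -d disk2.qcow2 gw
# attach to the serial console to drive the installer
vmctl console gw
```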

on the VM
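With -L, vmd hands the guest an address from its built-in DHCP server, so the installer’s network questions are easy. A rough sketch of the relevant answers (not a verbatim transcript):

```
(I)nstall, (U)pgrade, (A)utoinstall or (S)hell? i
...
Which network interface do you wish to configure? vio0
IPv4 address for vio0? dhcp
...
Which disk is the root disk? sd0
```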

At this point you can HALT the installer and copy the image as many times as needed; in this case just once more, in order to have one GW and one BE server. If both VMs are running they should be able to ping and communicate via the local (-L) network.
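Concretely, something like this on the host (paths and VM names match the vm.conf below):

```
# stop the installer VM so the image is quiescent, then clone it
vmctl stop gw
cp disk2.qcow2 disk1.qcow2
# boot both VMs from their own copies
vmctl start -m 1G -L -i 1 -n gw_switch -d disk2.qcow2 gw
vmctl start -m 1G -L -i 1 -d disk1.qcow2 vm1
```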

At this point the VMs can communicate and they can connect to the internet. The challenge is that the public internet cannot connect to the gateway.

on the host
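The host-side fix is to forward packets and redirect inbound web traffic to the gateway VM. A sketch (the 100.64.x.x address is an assumption; vmd assigns the real one, visible with ifconfig inside the guest):

```
# enable IP forwarding now and across reboots
sysctl net.inet.ip.forwarding=1
echo 'net.inet.ip.forwarding=1' >> /etc/sysctl.conf

# pf.conf additions on the host; gw_vm is hypothetical, use your VM's address
#   gw_vm = "100.64.1.3"
#   pass in on egress proto tcp to port { 80 443 } rdr-to $gw_vm

# reload the ruleset
pfctl -f /etc/pf.conf
```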


Configuration files


/etc/vm.conf

# internet gateway and backend server

switch "gw_switch" {
    interface bridge0
}

# vmctl create -s 50G disk2.qcow2
# vmctl start -m 1G -L -i 1 -n gw_switch -r install68.iso -d disk2.qcow2 gw
vm "gw" {
    memory 1G
    disk "/home/rbucker/disk2.qcow2"
    local interface
    interface { switch "gw_switch" }
    interfaces 1
}

# vmctl create -s 50G disk1.qcow2
# vmctl start -m 1G -L -i 1 -r installXX.iso -d disk1.qcow2 vm1
vm "vm1" {
    memory 1G
    disk "/home/rbucker/disk1.qcow2"
    local interface
    interfaces 1
    #interface vio0
}
/etc/pf.conf (just the last 2 lines and the dns_server)

#       $OpenBSD: pf.conf,v 1.55 2017/12/03 20:40:04 sthen Exp $
# See pf.conf(5) and /etc/examples/pf.conf

dns_server = ""
set skip on lo

block return    # block stateless traffic
pass            # establish keep-state

# By default, do not permit remote connections to X11
block return in on ! lo0 proto tcp to port 6000:6010

# Port build user does not need network
block return out log proto {tcp udp} user _pbuild

# 100.64.0.0/10 is the default prefix vmd assigns to local (-L) interfaces
match out on egress from 100.64.0.0/10 to any nat-to (egress)
pass in proto { udp tcp } from 100.64.0.0/10 to any port domain \
        rdr-to $dns_server port domain


Now the GW can receive requests and proxy them to the backend. It is possible to add a third tier; let’s call that the DB server. That would be a separate switch with an isolated private network. Lastly, it’s time to install the applications: haproxy on the gateway and whatever the app is on the backend server (plus the Let’s Encrypt client, acme-client(1), on the GW).
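A minimal haproxy sketch for the gateway (the config path, backend address, and port are assumptions for illustration):

```
# /etc/haproxy/haproxy.cfg (fragment)
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend app

backend app
    # the backend VM's local address -- substitute your own
    server be1 100.64.2.3:8080 check
```

Then `rcctl enable haproxy && rcctl start haproxy` on the gateway.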

Next steps

I need to turn this into a script; however, I need to be able to configure the OS installation and that’s not working well (see autoinstall(8)). It might also be beneficial to deploy a second haproxy in an HA/CARP configuration… or maybe not. Then install the apps.
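For reference, autoinstall(8) drives the installer from an answer file fetched over the network. A minimal sketch (every value below is a placeholder):

```
# install.conf, served to the installer over HTTP
System hostname = gw
Password for root account = *************
Setup a user = rbucker
Allow root ssh login = no
Location of sets = http
HTTP Server = cdn.openbsd.org
```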