After a long hiatus from writing blog posts, I am back at it! I finally found the (monetary) motivation to fix this blog up and move it to a new, lower-cost home (while, of course, first spending a bunch of money doing so). Ultimately, the savings will be well worth it, and what better way to come back to blogging than to talk about how I accomplished this move.
Historically, this blog has always been hosted on a rented VPS. When I saw the bill for the latest month, I decided it was time to bite the bullet and move the blog, cutting the needless cost of delivering a half-broken (sometimes fully broken), unmaintained site.
Things have come a long way since I first started hosting this blog. Initially, my main drivers for using a VPS were:
- Network connectivity – with my paltry 30 Mbps upload speed, I was not about to waste it on hosting content from home
- Security – I didn’t want to expose this blog directly from my home IP to the internet
So, what’s changed since then, and how do we address these issues? Technology has advanced significantly in the last few years, especially around private/reverse tunneling. I’ve also since acquired better networking equipment, a faster internet connection, and more experience running services within a secure environment.
Let’s start with a simplified overview of what the configuration looks like now:
Egress Networking
With this updated architecture, the system sits on its own dedicated, isolated LAN, firewalled away from all other traffic. Additionally, routing rules ensure that all internet-bound traffic from the DMZ LAN is sent through a WireGuard VPN tunnel. Even if the host were compromised, traffic would not originate from my home network’s IP range; it would instead exit via a VPN exit node’s IP address.
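As a rough sketch of the idea (the subnet, routing table number, and interface names below are assumptions, not my actual config), policy routing plus a kill-switch rule can force all DMZ traffic through the tunnel:

```shell
# Hypothetical names: 10.30.0.0/24 = DMZ subnet, wg0 = WireGuard interface.

# Give the DMZ its own routing table whose only default route is the tunnel.
ip route add default dev wg0 table 100
ip rule add from 10.30.0.0/24 table 100

# Kill switch: if the tunnel is down, drop DMZ traffic entirely rather
# than letting it leak out via the home WAN connection.
iptables -A FORWARD -s 10.30.0.0/24 ! -o wg0 -j DROP
```

The kill switch is the important part – without it, a downed tunnel would silently fail open and traffic would egress from the home IP after all.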
Ingress Networking
To provide access to this blog for readers like you, I’ve leveraged Cloudflare Tunnels as a reverse proxy into the DMZ LAN. This lets me expose specific services hosted on the Raspberry Pi without opening a listening port to the world. For added convenience, Cloudflare also handles TLS out of the box – no more needing to remember to renew certs.
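For a sense of what this looks like, here is a minimal sketch of a cloudflared configuration (the tunnel ID, paths, hostname, and port are placeholders, not my real values):

```yaml
# ~/.cloudflared/config.yml — illustrative sketch only.
tunnel: <tunnel-uuid>
credentials-file: /home/pi/.cloudflared/<tunnel-uuid>.json
ingress:
  # Route the blog's hostname to the WordPress container on loopback.
  - hostname: blog.example.com
    service: http://localhost:8080
  # Anything else gets a 404.
  - service: http_status:404
```

The cloudflared daemon makes an outbound connection to Cloudflare’s edge, so nothing ever listens on a public port on the Pi itself.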
Additionally, in this design, I’ve deployed Tailscale so that I can use an overlay network to reach the Raspberry Pi sitting in the DMZ LAN. This means I don’t need extra firewall rules poking holes from the DMZ LAN into any of the other VLANs; when I need access to the host, I simply sign on to the VPN and SSH in over the overlay network. And instead of having to deploy a public key to each host, access is governed via Tailscale SSH, allowing me to write ACLs that dynamically grant access to the hosts.
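Concretely, the host opts in with `tailscale up --ssh`, and access is then controlled centrally in the tailnet policy file. A minimal sketch of the SSH section (the tag name is a hypothetical example, not my actual policy):

```json
// Fragment of a Tailscale ACL policy file (HuJSON, so comments are allowed).
// "tag:dmz" is an illustrative tag applied to the DMZ hosts.
{
  "ssh": [
    {
      "action": "accept",
      "src":    ["autogroup:member"],
      "dst":    ["tag:dmz"],
      "users":  ["autogroup:nonroot", "root"]
    }
  ]
}
```

Revoking someone’s access becomes a one-line policy change instead of hunting down authorized_keys entries on each machine.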
OS Provisioning
I wasted more time than I am willing to admit on this area. Wanting to make things easily reproducible, I started with a simple goal: use cloud-init to provision the node running Ubuntu 24.04 LTS. What I ended up with was headache after headache with how cloud-init functions and the limitations of how cloud-init interacts with a Raspberry Pi.
The issues boil down to one simple thing: cloud-init was designed for the cloud, and Raspberry Pis are definitely not the cloud. Most security underpinnings today rest on the concept of time, especially when it comes to CA certificates and other cryptographic signatures. This creates an interesting problem: a brand-new Raspberry Pi has no hardware clock to seed time to the operating system. In practice, this means most things – setting up new apt repositories, fetching the signing key for packages, or making any sort of TLS-enabled cURL call – will fail. Due to the way cloud-init functions, unless you have an RTC installed (which the Raspberry Pi 5 now supports, though you’d still need to figure out a way to initialize that clock), you will find that cloud-init falls over itself endlessly.
This ultimately took significant time to debug, largely because you’d think it should just work. I spent hours searching for a workaround, only to conclude that it wasn’t possible within the confines of cloud-init’s stages. Ultimately, the ugly solution is to basically cannibalize cloud-init: I wrote my own bash script that runs at the end of the cloud-init cycle, after the system’s clock has successfully synced with an NTP server, and it does all of the key fetching, package installation, and service setup.
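The shape of the workaround looks something like this (the script path and package list are illustrative, not my actual bootstrap): cloud-init only writes a script, and everything time-sensitive is deferred until the clock is trustworthy.

```yaml
#cloud-config
# Sketch of the workaround: defer all TLS/signature-dependent work to a
# script that runs at the end of the cloud-init cycle.
write_files:
  - path: /usr/local/bin/bootstrap.sh
    permissions: "0755"
    content: |
      #!/usr/bin/env bash
      set -euo pipefail
      # Block until systemd reports a synced clock; apt signature checks
      # and TLS calls are unreliable before this point.
      until [ "$(timedatectl show -p NTPSynchronized --value)" = "yes" ]; do
        sleep 5
      done
      # Now it is safe to add repos, fetch signing keys, and install.
      apt-get update
      apt-get install -y docker.io
runcmd:
  # runcmd executes late in cloud-init's final stage.
  - [/usr/local/bin/bootstrap.sh]
```

It’s inelegant – cloud-init is reduced to a glorified script launcher – but it sidesteps the clockless-boot problem entirely.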
Docker + Docker Compose
Now that the system is finally booted and connected to the Tailscale network, it’s time to set up some services! You may be wondering: why only Docker and Docker Compose? Why not Kubernetes? While Kubernetes is powerful, there isn’t much need for it in this environment – it adds a level of abstraction my setup doesn’t currently require, and it consumes precious resources that are limited in a Raspberry Pi-based environment.
I leverage a simple Docker Compose file that sets up the containers composing this WordPress site. The site runs as a container bound to localhost. The cloudflared service running on the host then connects to it, allowing the reverse proxy to serve the blog’s content directly to the internet.
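A stripped-down sketch of such a Compose file (images, the port, and credentials here are illustrative placeholders, not my actual configuration):

```yaml
# docker-compose.yml — illustrative sketch.
services:
  wordpress:
    image: wordpress:latest
    restart: unless-stopped
    # Bind to loopback only: cloudflared on the host is the sole way in,
    # so no port is exposed to the LAN or the internet.
    ports:
      - "127.0.0.1:8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: change-me
  db:
    image: mariadb:latest
    restart: unless-stopped
    environment:
      MARIADB_DATABASE: wordpress
      MARIADB_USER: wordpress
      MARIADB_PASSWORD: change-me
      MARIADB_RANDOM_ROOT_PASSWORD: "1"
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data:
```

The `127.0.0.1` binding is what makes the cloudflared pairing work: the tunnel’s ingress rule points at the loopback port, and nothing else can reach it.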
Next Steps
Now that the blog is functional again, there are a few housekeeping tasks remaining – the top one being backups. With the Raspberry Pi on the DMZ LAN, there is no access to internal backup systems. Instead, I will look at enabling S3-based backups to my existing cloud storage provider for periodic backups of this blog.