The Story So Far
As most of you know, I’ve been a big proponent of self-hosting services for oneself and one’s closest friends and relatives for, well, as long as I’ve been connected to the internet. At various points over the last thirty years I have hosted things in my home, at my place of work, and in the cloud, back and forth and back again, depending on what was convenient, cost-effective, and satisfying. Right now, most of my self-hosted services are running in the cloud, the least satisfying option, but the convenience was far better than not getting them set back up at all after moving to the east coast and rural life, so the cloud served its purpose.
But now, it’s time to bring things back home.
Moving Back In-House
Self-hosting various internet-facing services can be challenging for even long-time systems administrators and operators, and a lot of what defines how challenging it will be in any given instance is outside the direct control of the people setting it up. In my current situation, one of the challenges will be the local ISP that provides service to our house.
They’re a small rural ISP that started life as a small rural telephone exchange, family owned and operated, and they’re still run by the same family all these years later. They’ve done stellar work keeping up with the changes in the telco and telecom landscape over the years, offering DSL, then cable modems and cable TV packages, and now fiber internet to various chunks of the north Georgia mountains. A good friend has their top-tier offering, symmetric two-gigabit fiber, way out in the woods on the side of a mountain that this time last year boasted five megabit downstream over 512 kilobit upstream DSL, if the weather was perfect and it hadn’t rained lately and the animals hadn’t chewed through a line again.
They aren’t perfect, though. When I inquired about the availability and pricing of a static IP subnet for our residential cable modem connection (we can see the fiber across the state road our neighborhood adjoins, but it’s yet to make it to the poles on our side of the highway), they responded that yes, they offer static IP addresses, for $30/month. For one static IPv4 address. Monthly.
So, that’s not great, and not something I’m willing to pay. While we’re on the topic of IP addressing, another imperfection is the lack of IPv6 support. What could be an easy on-ramp to self-hosting some (IPv6-only) services isn’t an option, because the ISP just doesn’t support IPv6 yet, in 2024. I’m going to end up annoying them on this one, but I want to get some things moved in-house sooner than I expect them to make any changes.
Despite these challenges, we’re still moving almost everything back to in-house hardware, on our 750/35 Mbps cable modem connection. I have a plan, and it seems to be working out so far. Let’s take a look at the solution.
What Can I Move In-House?
Right now, I am operating about a half-dozen cloud servers, providing email, XMPP, and web hosting for various services for myself, more for my wife, and a few public-facing projects like Brutaldon and Linernotes Club. Together they cost a couple hundred dollars a month in the cloud. Some of them are overprovisioned because money was less expensive than time when they were set up. Now that I am underemployed, money is expensive again and my time is a little more flexible, so I would love to pull as much in-house as possible.
But I don’t necessarily want to plop a box in the single DMZ slot on my retail residential router and open all the necessary ports to the entire internet at large all the time. It can be fine! It can also go horribly wrong, and again, all the pieces of that equation aren’t in my hands or yours. I also don’t want to build out a nice, open source router at this stage – while it’d be better than Netgear’s crud, it’d be underutilized and overly complex. Once I can have an actual subnet hosted behind it, yes, I’ll put a real router in.
The plan is to use a small cloud server at minimal cost to proxy the necessary services hosted on gear at our house, without losing any current functionality or breaking any current applications. To make it all as seamless as possible, we’re going to lean on some of the lessons from a project I hosted and talked about on the fediverse, called simply The Tubes.
In that project (which may see a v2 return in coming months…stay tuned!), I set up an overlay network using WireGuard and invited others to add their own network segments to the overlay network in a shared virtual LAN space. Operators connected to The Tubes hosted web services, game servers, DNS servers, file servers, all on a network only accessible by other operators, so it was possible to experiment with semi-private spaces using the public internet as a transit layer.
This time around, I’ll be leveraging a different overlay network technology, ZeroTier, but many of the lessons learned from The Tubes are informing my solution.
The Pieces of My Solution
A Small Cloud Server
As I mentioned above, we’ll still have one virtual server in the cloud moving forward, acting as our public-facing interface and proxy for the self-hosted services at our house. Specifically, I’ll be using a VPS hosted at Vultr in Atlanta with 2 vCPUs, 2 GB of RAM, and 60 GB of NVMe storage, at a cost of $16 per month. If I moved every other cloud-hosted service I’m paying for behind this proxy (I won’t, but if I did), it would cut my costs by more than 90% compared to the current setup.
ZeroTier
ZeroTier is the overlay network I’ll be using as transport between my in-house self-hosted services and their public-facing edge on the cloud server. ZeroTier provides free clients for Linux, macOS, Windows, FreeBSD, Android, and iOS, as well as a hosted management service, ZeroTier Central, which allows for web-based management of one’s overlay network(s) and peers. ZeroTier Central is free for non-commercial users, but has a cap of 25 peers across all your overlay networks. As far as I’m concerned, that’s a perfectly reasonable limitation (there are other limitations in terms of functionality more suited to organizational usage than individual usage), but it’s also a cap that my nerdy self has already run into repeatedly. However, ZeroTier is also open source, up to and including running one’s own “planet” servers (the ZeroTier equivalent of the DNS root servers, used for route discovery and the like). As such, I’ve decided to forgo the hosted service and instead will be self-hosting an alternative called ZTNet.
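For a sense of how lightweight the client side is, getting a Linux box onto an overlay network boils down to a couple of commands. This is a minimal sketch: the network ID is a placeholder you get from your controller, and the package install assumes ZeroTier’s own apt repository (or their install.zerotier.com script) has already been set up.

```
# Install the client (assumes ZeroTier's apt repo or the install.zerotier.com script)
sudo apt install zerotier-one

# Join an overlay network -- replace the placeholder with your 16-digit network ID
sudo zerotier-cli join <network-id>

# Check the node's ZeroTier address and see whether the network has authorized it yet
sudo zerotier-cli info
sudo zerotier-cli listnetworks
```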
ZTNet
When I first started exploring options for this solution, I found “ZeroTier UI” in YunoHost’s application catalog, which leverages a project called “ztncui” from an organization called Key Networks. I installed ztncui successfully, and it appeared to do at least some of what I wanted from a ZeroTier UI, but some red flags and missing functionality caused me to keep investigating my options, which ultimately led me to ZTNet. ZTNet did not trigger the red flags that ztncui did for me, and also has what I consider to be better documentation for getting up and running. On top of that, ZTNet provides a path for removing the stock ZeroTier planets from your overlay network if you prefer not to have any traffic reaching out to ZeroTier’s servers. Initially I will be leaving their servers in the mix for my network, as the additional discoverability may be handy while I’m messing around with things. I have tested the functionality, though, and can confirm that my ZeroTier clients were able to communicate with my ZTNet server just fine without ZeroTier’s planets involved.
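For the curious, ZTNet ships as a Docker Compose stack: a Postgres database, a local ZeroTier controller, and the ZTNet web UI. The sketch below is reconstructed from memory of ZTNet’s documentation rather than copied from it, so the image names and environment variables are assumptions; treat the compose file in the ZTNet repository as the source of truth.

```
# Sketch of a ZTNet stack -- image names and variables are assumptions, check ZTNet's docs
services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: ztnet
      POSTGRES_PASSWORD: change-me
      POSTGRES_DB: ztnet
    volumes:
      - postgres-data:/var/lib/postgresql/data

  zerotier:
    image: zyclonite/zerotier:latest        # local ZeroTier controller (assumed image)
    volumes:
      - zerotier-data:/var/lib/zerotier-one

  ztnet:
    image: sinamics/ztnet:latest            # ZTNet web UI (assumed image)
    depends_on:
      - postgres
      - zerotier
    ports:
      - "3000:3000"
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_USER: ztnet
      POSTGRES_PASSWORD: change-me
      POSTGRES_DB: ztnet
      NEXTAUTH_URL: https://ztnet.example.com     # placeholder URL for the web UI
      NEXTAUTH_SECRET: generate-a-long-random-string

volumes:
  postgres-data:
  zerotier-data:
```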
YunoHost
Mentioned briefly above, YunoHost is my web application hosting platform of choice when it comes to self-hosting services. Based on Debian and built by seasoned systems administrators, YunoHost provides email services, XMPP services, an LDAP-based SSO system for unified user authentication across the hosted services, and a large catalog of web apps that have been packaged for the platform. I use YunoHost to provide my password manager (VaultWarden), a Nextcloud install for our family’s use, webmail, my source code forge, a diagramming tool, my RSS feed reader, my Matrix client, my bookmarking tool, my IRC client, and more – it’s very handy!
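Everything in that catalog can be driven from the web admin or the command line; as a rough sketch of the CLI side (the domain, username, and app IDs below are just examples):

```
# Add a domain, create a user, and install apps from the YunoHost catalog
sudo yunohost domain add example.org
sudo yunohost user create alice          # prompts for full name, domain, and password
sudo yunohost app install nextcloud      # app IDs come from the YunoHost catalog
sudo yunohost app install vaultwarden
sudo yunohost app list                   # list what's installed
```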
YunoHost isn’t really set up to live behind a proxy server hosted elsewhere, so I’m not following their recommendations for much of my setup, but I am willing to take on the additional support burden of being weird, so it should work out. I will, however, need a way to direct traffic arriving at my cloud server to the appropriate backend service on my YunoHost server, which runs as a VM back at the house. Enter NGINX Proxy Manager.
NGINX Proxy Manager
NGINX Proxy Manager (often abbreviated to NPM, which I will not be doing, as NPM is already overloaded as an acronym in the web space) is a containerized solution for setting up NGINX as a reverse proxy to various services, along with built-in Let’s Encrypt TLS certificate management and a few other bells and whistles. Most importantly for my use case, NGINX Proxy Manager also allows for the proxying of non-web protocols as “proxy streams,” meaning I can forward plain TCP or UDP traffic arriving at the cloud server to the appropriate backend server and port without having to manually configure IP forwarding rules using iptables/nftables or the like.
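Under the hood, the stream side of NGINX Proxy Manager is NGINX’s stream module. Conceptually, forwarding mail traffic from the cloud box to the house over ZeroTier amounts to something like the snippet below. This is a hand-written sketch rather than the config NGINX Proxy Manager actually generates, and the 10.147.17.10 address is a made-up ZeroTier overlay address standing in for the YunoHost VM.

```
# Conceptual equivalent of NGINX Proxy Manager "streams" -- not its generated config
stream {
    # Forward inbound SMTP to the YunoHost VM over the ZeroTier overlay
    server {
        listen 25;
        listen [::]:25;
        proxy_pass 10.147.17.10:25;   # placeholder ZeroTier address of the backend
    }

    # Same idea for IMAPS
    server {
        listen 993;
        listen [::]:993;
        proxy_pass 10.147.17.10:993;
    }
}
```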
Putting It All Together
The overall result is an architecture that flows like this:
- Traffic is received on the public-facing cloud server, via either IPv4 or IPv6
- NGINX Proxy Manager compares the request against the configured proxying rules and forwards it appropriately over ZeroTier to the related backend service
- The backend service handles the request as if it had received it directly from the client, sending any response back over the ZeroTier network to NGINX Proxy Manager
- NGINX Proxy Manager sends the response back to the original requestor like a good proxy server should (a conceptual sketch of one of these rules follows just after this list)
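For web traffic, one of those proxying rules is conceptually equivalent to the server block below. Again, this is a sketch rather than NGINX Proxy Manager’s actual output, and the hostname and ZeroTier address are placeholders. The forwarded headers are the important part: without them, the backend would see every request as coming from the proxy’s overlay address.

```
# Rough equivalent of an NGINX Proxy Manager "proxy host" rule -- a sketch only
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name blog.example.org;              # placeholder public hostname

    # Certificates issued via Let's Encrypt on the cloud server
    ssl_certificate     /etc/letsencrypt/live/blog.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.example.org/privkey.pem;

    location / {
        proxy_pass https://10.147.17.10:443;   # placeholder ZeroTier address of the YunoHost VM
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```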
This works for web traffic, SMTP submission, IMAP transactions, XMPP chat, and virtually any other type of traffic I may need or want to handle as a self-hoster, apparently – I haven’t completed all of the setup yet. There are some minor caveats. Stream proxying, for instance, can only handle one backend service per port, so hosting two separate SMTP servers behind a single proxied port 25 is not going to happen; however, that’s not a scenario I need to handle, since YunoHost is perfectly capable of handling mail for multiple domains on a single install, so both my mail services on my domains and my wife’s mail services on her domains will work just fine over the single proxied SMTP stream. Overall, I’m happy with this setup at this early stage.
Where We Leave Things
So, how’s it working?
Well, it’s coming together. I’m still in the process of getting it all configured and happy as of today, but all the main bits and pieces are in place. My next step is to take a current backup of my cloud-based YunoHost installation and restore the backup on my new in-house YunoHost installation, then set up the appropriate proxying configurations in NGINX Proxy Manager. Update: This bit is partially done now, at least the core YunoHost config and the WordPress app. I’ll restore the other bits over the next few days as time allows.
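The backup-and-restore dance itself is pleasantly boring thanks to YunoHost’s built-in tooling. Roughly, and with a placeholder archive name standing in for whatever the backup list command actually reports:

```
# On the cloud YunoHost box: create a full backup and note its archive name
sudo yunohost backup create
sudo yunohost backup list                      # archive names look like 20240315-123456

# Copy the archive from /home/yunohost.backup/archives/ to the same path on the new VM,
# then restore it on the in-house YunoHost install (placeholder archive name below)
sudo yunohost backup restore 20240315-123456
```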
Depending on how the rest of today goes, and on file transfer speeds, you may even be reading this blog post over the proxied connection by the time I hit publish…yep! This post is being served over the hacktastic network structure described above – you’re seeing it in action right now!
Regardless, stay tuned for more from me on this exercise in self-hosting, and if you have any comments, questions, or recommendations, feel free to pass them along, either by commenting on this post directly or in the fediverse where I’m most available at https://toot-lab.reclaim.technology/@djsundog. Once I have things working and I am happy with the results, I’ll write up another post detailing my installation process and any stumbling blocks I’ve encountered to try and make the path a little easier to navigate in the future, but so far so good.
Happy hacking!