The article's dystopia section is dramatic but the practical point is real. I've been self-hosting more and more over the past year specifically because I got uncomfortable with how much of my stack depended on someone else's servers.

Running a VPS with Tailscale for private access, SQLite instead of managed databases, flat files synced with git instead of cloud storage. None of this requires expensive hardware; it just requires caring enough to set it up.
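To illustrate the SQLite point: the whole "database" is one file you can back up with cp or commit to git, no daemon to run. A minimal Python sketch (the file and table names are just placeholders):

```python
import sqlite3

# The entire database lives in one ordinary file -- back it up like any other.
conn = sqlite3.connect("notes.db")
conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello from my VPS",))
conn.commit()

rows = conn.execute("SELECT body FROM notes").fetchall()
print(rows[0][0])  # -> hello from my VPS
conn.close()
```

That's the entire "ops" story for a small service: no connection strings, no credentials, no managed instance to pay for.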

> with Tailscale for private access

FWIW, you might want to check out https://github.com/wg-easy/wg-easy to remove yet another managed-elsewhere piece of your setup.

Or Headscale, which has the blessing of Tailscale and contributions from some of their employees. https://github.com/juanfont/headscale

thanks for sharing, will check it out!

Depending on someone else's servers isn't that different from depending on someone else's software, which unfortunately we all must do. Unfathomable reams of it, with a growth curve that recently went vertical. I guess the crucial difference is that someone else's servers can be taken away in a flash, while someone else's (FOSS) software can't.

You are missing one important part: maintenance. On a managed service, dozens of hours of maintenance are done by someone else; when you are self-hosting, you'll be doing three times that: you can't know all the details of making so many tools work, each tool will have to be upgraded at some point and the upgrade will fail, you have to test your backups, and there are many, many more things to do in the long run.

So yeah, it's fun. But don't underestimate that time; it could easily be time you'd otherwise spend with friends or family.

I have been self hosting for years. The maintenance is minimal to nonexistent. You are conflating modern SaaS with a stable OSS docker image.

Keeping services running is fairly trivial. Getting to parity with the operationalization you get from a cloud platform takes more ongoing work.

I have a homelab that supports a number of services for my family. I have offsite backups (rsync.net for most data, a server sitting at our cottage for our media library), alerting, and some redundancy for hardware failures.

Right now, I have a few things I need to fix:

- one of the nodes didn't boot back up after a power outage last fall; need to hook up a KVM to troubleshoot
- cottage internet has been down since a power outage, so those backups are behind (I'm assuming it's something stupid, like I forgot to change the BIOS to power on automatically on the new router I just put in)
- various services occasionally throw alerts at me

I have a much more complex setup than necessary (k8s in a homelab is overkill), but even the simplest system still needs backups if you care at all about your data. To be fair, cloud services aren't immune to this, either (the failure mode is more likely to be something like your account getting compromised, rather than a hardware failure).
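One cheap piece of that alerting can be a staleness check on the backup target. A Python sketch (the path and the 26-hour threshold are made up; in practice you'd point it at your real backup directory and feed the result into whatever notifies you):

```python
import os
import time

def backup_is_fresh(path: str, max_age_hours: float = 26.0) -> bool:
    """Return True if the newest file under `path` was modified recently.

    A crude check: a nightly backup job that hasn't produced a new file
    in roughly a day probably deserves an alert.
    """
    newest = 0.0
    for root, _dirs, files in os.walk(path):
        for name in files:
            newest = max(newest, os.path.getmtime(os.path.join(root, name)))
    return (time.time() - newest) < max_age_hours * 3600

# Example: alert (print, email, push notification -- your choice) when stale.
if not backup_is_fresh("/srv/backups"):
    print("backup is stale -- investigate")
```

Run it from cron and you get the "cottage internet has been down" failure mode surfaced within a day instead of whenever you happen to check.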

A hidden cost of self-hosting.

I love self-hosting and run tons of services that I use daily. The thought of random hardware failures scares me, though. Troubleshooting hardware failure is hard and time consuming. Having spare minipcs is expensive. My NAS server failing would have the biggest impact, however.

Other than the firewall (itself a minipc), I only have one server where a failure would cause issues: it's connected to the HDDs I use for high-capacity storage, and has a GPU that Jellyfin uses for transcoding. That would only cause Jellyfin to stop working—the other services that have lower storage needs would continue working, since their storage is replicated across multiple nodes using Longhorn.

Kubernetes adds a lot of complexity initially, but it does make it easier to add fault tolerance for hardware failures, especially in conjunction with a replicating filesystem provider like Longhorn. I only knew that I had a failed node because some services didn't come back up until I drained and cordoned the node from the cluster (looks like there are various projects to automate this—I should look into those).
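For anyone curious what the Longhorn part looks like concretely: once installed it registers a `longhorn` StorageClass, and a claim like this (the name and size are just placeholders) gets a volume replicated across nodes:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-config   # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
```

Any pod mounting that PVC can come back up on a surviving node after a hardware failure, which is the fault tolerance being described.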

This point is oversold.

Sure - self-hosting takes a bit more work. It usually pays for itself in saved costs (e.g., if you weren't doing this work yourself, you'd be paying money, which you had to work to earn, to have it done for you).

Cloud costs haven't actually gotten much cheaper (but the base hardware HAS gotten cheaper - even now, during these inflated costs), and now every bit of software tries to bill you monthly.

Further, if you're not putting services open on the web - you actually don't need to update all that often. Especially not the services themselves.

Honestly - part of the benefit of self-hosting is that I can choose whether I really want to make that update to latest, and whether the features matter to me. Often... they don't.

---

Consider: most people are running outdated ISP-provided routers with known vulnerabilities that haven't been updated in literally years. They do ok.

Much easier with AI. I went from an all-in-one web hosting package + NAS to Hetzner Storage Share and a separate email provider (Runbox). After a short time I dumped the Nextcloud instance and moved on to a Hetzner VPS with five docker containers, Caddy, proper authentication and all. Plus a Storage Box. Blogging/homepage as Cloudflare Pages, fed by GitHub, domains from CF and porkbun, Tailscale, etc., etc. ad nauseam; the NAS is still there.

Most of this I didn't do for many years because it is not my core competence (in particular the security aspects). Properly fleshed-out explanations from any decent AI will catapult you to this point in no time. Maintenance? Almost zero.

p.s. Admittedly, it's not a true self-hosting solution, but the approach is similar and ultimately leads to that as well.
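For reference, the Caddy piece of a setup like that can be surprisingly small; a Caddyfile along these lines (the hostnames, container names, and ports are invented) gets you automatic HTTPS plus reverse-proxying to each container:

```
app.example.com {
    reverse_proxy app:8080
}

notes.example.com {
    reverse_proxy 127.0.0.1:3000
}
```

One block per service, and Caddy handles certificate issuance and renewal on its own.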

Since using NixOS for my home server, I've found it to Just Work™ flawlessly every time.

If anyone reading this has struggled with servers accumulating cruft and requiring maintenance, I recommend NixOS.

Agreed. NixOS + Tailscale is 99% there for me. Using Claude Code to deal with whatever other package I need built with nix while I'm working on $day_job things helps get me to a fully working system. Besides the fact that running containers via podman or docker (your choice) is super easy via a NixOS config.

Combine that with deploy-rs or similar and you have a very very stable way to deploy software with solid rollback support and easy to debug config issues (it's just files in the ./result symlink!)
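A sketch of the container part in a NixOS config (the service and image are placeholders I picked for illustration; `virtualisation.oci-containers` is the module that wraps podman/docker):

```nix
{
  virtualisation.oci-containers = {
    backend = "podman";  # or "docker", your choice
    containers.uptime-kuma = {  # placeholder service
      image = "louislam/uptime-kuma:1";
      ports = [ "127.0.0.1:3001:3001" ];
    };
  };
}
```

The container becomes a systemd unit managed like everything else in the config, so rollbacks cover it too.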

Yes, I do agree with that sentiment. There are times when I'm spending way too much time restarting a service that went down, but it doesn't take as long as it used to, especially with AI assistance nowadays. And if I'm spending too much time on it, I'm probably also learning something along the way, so I don't mind spending that time.

There are a lot of people who have made a lot of money and built careers because developers in particular don't want to know, or don't care to know, how to manage this stuff.

They need to get over it.

Pick up some Ansible and/or Terraform/OpenTofu and automate away. It can be as easy or as involved as you want it to be.
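The "automate away" part really can be a handful of lines. A hedged Ansible sketch (the `homelab` host group is a placeholder, and this assumes Debian/Ubuntu hosts with docker already installed):

```yaml
- hosts: homelab
  become: true
  tasks:
    - name: Keep base packages current
      ansible.builtin.apt:
        upgrade: safe
        update_cache: true

    - name: Ensure docker is running and enabled at boot
      ansible.builtin.service:
        name: docker
        state: started
        enabled: true
```

Run it with `ansible-playbook` against your inventory and the routine maintenance people worry about upthread becomes one idempotent command.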