Over 20 years I've had lots of clients on self-hosted setups, even self-hosting SQL on the same VM as the web server, as one did in the distant past for low-usage web apps.
I have never, ever, ever had a SQL box go down. I've had a web server go down once. I had someone who probably shouldn't have had access to a server accidentally turn one off once.
The only major outage I've had (2-3 hours) was when the box was also self-hosting an email server and a deploy of mine accidentally caused it to flood itself with failed-delivery notices.
I may have cried a little in frustration and panic but it got fixed in the end.
I actually find using cloud-hosted SQL in some ways harder and more complicated, because it's such a confusing mess of pricing and what you're actually getting. The only big complication with self-hosting is setting up backups, and that's a one-off task.
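And even that can be tiny. A minimal sketch of a nightly dump, assuming Postgres and a cron entry to run it; the paths and database name are placeholders:

```python
import subprocess
from datetime import date
from pathlib import Path

# Hypothetical paths/names -- adjust for your setup.
BACKUP_DIR = Path("/var/backups/postgres")
DB_NAME = "appdb"

def nightly_backup() -> Path:
    """Dump the database with pg_dump; run this from cron."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    out = BACKUP_DIR / f"{DB_NAME}-{date.today()}.dump"
    # Custom format (-Fc) is compressed and restorable with pg_restore.
    subprocess.run(
        ["pg_dump", "-Fc", "-f", str(out), DB_NAME],
        check=True,
    )
    return out

if __name__ == "__main__":
    print(f"wrote {nightly_backup()}")
```

Ship the dump file off the box (rsync, object storage, whatever) and you've covered the disk-failure case.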
Disks go bad. RAID is nontrivial to set up. Hetzner had a big DC outage that led to data loss.
Off site backups or replication would help, though not always trivial to fail over.
As someone who has set this up without being a DBA or sysadmin:
Replication and backups really aren't that difficult to set up properly with something like Postgres. You can also expose metrics so you can alert when replication lag exceeds a threshold you set or a backup didn't complete. You do need to periodically test your backups, but that's just good practice anyway.
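To make the metrics point concrete, here's a rough sketch of a lag check against pg_stat_replication, assuming psycopg2; the DSN, threshold, and alert hook are made up:

```python
import psycopg2

# Hypothetical connection string and threshold -- adjust for your setup.
PRIMARY_DSN = "host=db-primary dbname=postgres user=monitor"
MAX_LAG_BYTES = 64 * 1024 * 1024  # alert past 64 MiB of unreplayed WAL

def alert(msg: str) -> None:
    # Stand-in for PagerDuty / email / Prometheus pushgateway / etc.
    print(f"ALERT: {msg}")

def check_replication_lag() -> None:
    """Compare each standby's replayed WAL position to the primary's."""
    with psycopg2.connect(PRIMARY_DSN) as conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT application_name,
                   pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag
            FROM pg_stat_replication
            """
        )
        rows = cur.fetchall()
        if not rows:
            alert("no standbys connected at all")
        for name, lag in rows:
            if lag is not None and lag > MAX_LAG_BYTES:
                alert(f"standby {name} is {lag} bytes behind")

if __name__ == "__main__":
    check_replication_lag()
```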
I am not saying something like RDS doesn't have value, but you are paying a huge premium for it. Once you reach a steady state, owning your database totally makes sense. A cluster of $10-20 VPSes with NVMe drives can get really good performance and will take you a lot farther than you might expect.
I think the pricing of the big three is absurd, so I'm on your side in principle. However, it's the steady state that worries me: when the box has been running for 4 years and nobody who works there has any (recent) experience operating Postgres anymore. That shit makes me nervous.
More than that, it's easier than it ever was to set up, but we live in the post-truth world where nobody wants to own their shit (both figuratively and concretely)...
Yes. You can also run these Postgres replicas across regions.
Even easier with SQLite, thanks to Litestream.
Datasette and datasette-lite (WASM, via Pyodide) are web UIs for SQLite that pair with sqlite-utils.
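sqlite-utils also has a Python API, so building a database for Datasette to serve takes a few lines; the table, columns, and filename here are invented for illustration:

```python
import sqlite_utils

# Hypothetical data and filename.
db = sqlite_utils.Database("clients.db")
db["projects"].insert_all(
    [
        {"id": 1, "name": "billing", "host": "self-hosted"},
        {"id": 2, "name": "reports", "host": "cloud"},
    ],
    pk="id",
)
# Then `datasette clients.db` serves it as a browsable web UI.
```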
For read-only applications, it's possible to host datasette-lite and the SQLite database as static files on a redundant CDN. Datasette-lite plus a URL-redirect API plus Litestream would probably work well too, maybe even read-write; electric-sql also has a sync engine (with optional partial replication), and there's PGlite (Postgres in WebAssembly).
For this kind of small-scale setup, a reasonable backup strategy is all you need. The one critical part is that you actually verify your backups are completing and that they work.
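"Verify they work" means actually restoring and querying, not just checking that a file appeared. A rough sketch, assuming pg_dump custom-format dumps; the paths, scratch database, and sanity-check table are placeholders:

```python
import subprocess

# Hypothetical names -- point these at a scratch server, never production.
DUMP_FILE = "/var/backups/postgres/appdb-latest.dump"
SCRATCH_DB = "restore_test"

def verify_backup() -> None:
    """Restore last night's dump into a throwaway DB and sanity-check it."""
    subprocess.run(["dropdb", "--if-exists", SCRATCH_DB], check=True)
    subprocess.run(["createdb", SCRATCH_DB], check=True)
    subprocess.run(["pg_restore", "-d", SCRATCH_DB, DUMP_FILE], check=True)
    # Any query that fails loudly on an empty/corrupt restore will do;
    # "invoices" is a made-up table name.
    out = subprocess.run(
        ["psql", "-d", SCRATCH_DB, "-tAc", "SELECT count(*) FROM invoices"],
        check=True, capture_output=True, text=True,
    )
    assert int(out.stdout) > 0, "restore succeeded but table is empty"

if __name__ == "__main__":
    verify_backup()
    print("backup restored and passed sanity check")
```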
Hardware doesn't fail that often. A single server will easily run for many years without any issues, if you aren't unlucky. And many smaller setups can tolerate the downtime it takes to rent a new server or VM and restore from backup.
One thing that will always stick in my mind is from when I worked at a national Internet service provider.
The log disk was full or something; that's not the shameful part, though. What followed was a mass email saying everyone needed to update their connection string from bla bla bla 1 dot foo dot bar to bla bla bla 2 dot foo dot bar.
This was inexcusable to me. I mean, this is an Internet service provider. If we can't even figure out DNS, we should shut down the whole business and go home. (A DNS record pointing at the database would have let them move the host without anyone touching a connection string.)
They do, it isn't, and cloud providers also go bad.
> Off site backups or replication would help, though not always trivial to fail over.
You want those regardless of where you host.
So can the cloud, and cloud has had more major outages in the last 3 months than I've seen on self-hosted in 20 years.
Deploys these days take minutes, so what's the problem if a disk does go bad? You lose at most a day of data if you go with the 'standard' overnight backups, and if it's mission-critical you will have already set up replicas, which again is pretty trivial and only slightly more complicated than doing it on cloud hosts.
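For anyone who hasn't done it: on modern Postgres, bootstrapping a streaming replica is roughly one pg_basebackup call. A sketch, assuming a replication user and a pg_hba.conf entry already exist; the host, user, and data directory are made up:

```python
import subprocess

# Hypothetical primary host, replication user, and data directory.
PRIMARY = "db-primary.internal"
REPL_USER = "replicator"
DATA_DIR = "/var/lib/postgresql/17/main"

def bootstrap_standby() -> None:
    """Clone the primary and write standby config in one go."""
    subprocess.run(
        ["pg_basebackup",
         "-h", PRIMARY, "-U", REPL_USER,
         "-D", DATA_DIR,
         "-R",             # write standby.signal + primary_conninfo
         "-X", "stream"],  # stream WAL during the base backup
        check=True,
    )
    # Then just start Postgres on this node; it comes up as a hot standby.
```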
> ...you will have already set up replicas, which again is pretty trivial and only slightly more complicated than doing it on cloud hosts.
Even on PostgreSQL 18 I wouldn't describe self-hosted replication as "pretty trivial". On RDS you can get an HA replica (or cluster) by clicking a radio button.
Not as often as you might think. Hardware doesn’t fail like it used to.
Hardware also monitors itself reasonably well these days, because hosting providers run the same gear.
It's trivial to run mirrored containers on two separate Proxmox nodes, because hosting providers use the same kind of stuff.
Off-site backups and replication? Also point-and-click and trivial with tools like Proxmox.
RAID is actually trivial to set up, if you don't compare it to doing it manually yourself from the command line. Again, tools like Proxmox make it point-and-click plus five minutes of watching a YouTube tutorial.
If we want to find a solution, our brains will find it. If we don't, we'll find reasons not to.
> if you don’t compare it to doing it manually yourself
Even if you do, ZFS makes this pretty trivial as well.
> RAID is nontrivial to set up.
Skill issue?
It's not 2003; modern volume-managing filesystems (e.g. ZFS) make creating and managing RAID trivial.
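Case in point, a two-disk ZFS mirror is essentially one command. A sketch (wrapped in Python; the pool and device names are examples, and zpool create will wipe those disks):

```python
import subprocess

# Hypothetical pool/device names -- zpool create destroys data on these disks!
POOL = "tank"
DISKS = ["/dev/sda", "/dev/sdb"]

def create_mirror() -> None:
    """Create a two-disk ZFS mirror (RAID1-equivalent)."""
    subprocess.run(["zpool", "create", POOL, "mirror", *DISKS], check=True)

def pool_health() -> str:
    """One-word pool health: ONLINE / DEGRADED / FAULTED."""
    out = subprocess.run(
        ["zpool", "list", "-H", "-o", "health", POOL],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()
```

A degraded mirror keeps serving reads and writes while you swap the bad disk and `zpool replace` it.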