For self-use the author has a point, but for public-facing sites not so much, because:

- infra work is thankless (see below)

- outages will last longer because you're unlikely to have failovers (for disk failures, etc.), on top of the time it takes you to react to them (there's no point in being paged for hobby work)

- more importantly, malicious LLM scrapers will put your infra under stress, and

- if you host large executables, you'll likely want to do things like banning Microsoft's IP ranges because of irresponsible GH Actions users [1] [2] [3]; one way to automate that is sketched right after this list
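
As a minimal sketch of that last point: GitHub publishes the CIDR ranges its Actions runners egress from via its public meta API, and you can turn any such list into deny rules for your reverse proxy. This assumes you front the site with nginx; any other CIDR source or firewall format works the same way.

    import json
    import urllib.request

    # GitHub's public meta endpoint; its "actions" field lists the CIDR ranges
    # used by Actions runners. Swap in any other CIDR source you trust.
    META_URL = "https://api.github.com/meta"

    def fetch_cidrs(url=META_URL, field="actions"):
        with urllib.request.urlopen(url, timeout=10) as resp:
            meta = json.load(resp)
        return meta.get(field, [])

    if __name__ == "__main__":
        # Print one nginx "deny" rule per range; include the output in a server block.
        for cidr in fetch_cidrs():
            print(f"deny {cidr};")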

In the end it's just a lot less stress to pay someone else to deal with the infra, for example by hosting static sites on GH Pages or CF Pages and leaning on CF's caching.

[1] https://www.theregister.com/2023/06/28/microsofts_github_gmp...

[2] https://news.ycombinator.com/item?id=36380325

[3] https://github.com/actions/runner-images/issues/7901

We used to host production websites this way just fine as recently as 10-15 years ago. These days you can do it with as few as two machines and a good router or two. The main risks are power outages from non-redundant power outside of a colo (solvable with a battery backup) and a non-redundant public internet link (potentially solvable with a cellular failover plus a lot of caching at the CDN, depending on the application).
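
To make the "lots of caching at the CDN" part concrete, here's a rough sketch (Python stdlib only; the header values are illustrative and directive support varies by CDN) of an origin that tells the edge it may keep serving stale content while your uplink is down:

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    class CachingHandler(SimpleHTTPRequestHandler):
        def end_headers(self):
            # s-maxage: how long shared caches (the CDN) may reuse the response;
            # stale-if-error: how long they may serve a stale copy if the origin
            # (your home connection) becomes unreachable.
            self.send_header(
                "Cache-Control",
                "public, max-age=300, s-maxage=86400, stale-if-error=604800",
            )
            super().end_headers()

    if __name__ == "__main__":
        # Serve the current directory and sit behind the CDN as the origin.
        HTTPServer(("0.0.0.0", 8080), CachingHandler).serve_forever()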

You generally still use a CDN and WAF to filter incoming traffic when you self-host (even without abusive scrapers you should probably do this for client latency). You can also serve large files from a cloud storage bucket for external users where it makes sense.
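
A rough sketch of the "large files from a bucket" idea, again stdlib only; the bucket URL and the /downloads/ prefix are placeholders, not anything specific to the setup above:

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    BUCKET_BASE = "https://example-bucket.s3.example.com"  # placeholder bucket URL

    class OffloadHandler(SimpleHTTPRequestHandler):
        def do_GET(self):
            # Treat anything under /downloads/ as a large artifact kept in object
            # storage and redirect to it; serve everything else from local disk.
            if self.path.startswith("/downloads/"):
                self.send_response(302)
                self.send_header("Location", BUCKET_BASE + self.path)
                self.end_headers()
                return
            super().do_GET()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), OffloadHandler).serve_forever()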

Only worth paying if you actually need it though.

And if it's a hobby, no you don't; that should be part of it. The fun is getting knocked out from orbit and figuring out how, why, and how to avoid it next time. Stand back up again and you've learned from that mistake :p