> It brings the power of edge computing to your own infrastructure.

I like the idea of self-hosting, but it seems fairly strongly opposed to the concept of edge computing. The edge is only made possible by big ass vendors like Cloudflare. Your own infrastructure is very unlikely to have 300+ points of presence on the global web. You can replicate this with a heterogeneous fleet of smaller and more "ethical" vendors, but also with a lot more effort and downside risk.

But do you need 300 PoPs to benefit from the edge model? Or would 10 PoPs in your primary territory be enough?

For most applications 1 location is probably good enough. I assume HN is single-location, and I'm a long way from CA, but I have no speed issues.

Caveat for high-scale sites and game servers. Maybe for image-heavy sites too (but self-hosting and then adding a CDN seems like a low-lock-in, low-cost option).

> For most applications 1 location is probably good enough.

If your use case doesn't require redundancy or high availability, why would you be using something like Cloudflare to start with?

Security. I host personal sites on Linodes and other external servers. There are no inbound ports open to the world. Everything is accessed via Cloudflare Tunnels and locked down via their Zero Trust services. I find this useful and good, as I don't really want to have to develop my personal services to the point where I'd consider them hardened for public internet access.

Not even SSH? What happens if Cloudflare goes down?

Not the OP, but services like Linode often offer "console" access via a virtualized TTY for VPS systems.

Having a local backup user is a viable fallback path then. If you wire up PAM enough, you can even use MFA for local login.

Then I log in to Linode or whatever and open a hole in the firewall. That's easy. But Cloudflare rarely goes down, so it's not really something I worry about.

You could restrict the SSH port by IP as well.

The DX is great: simple deployment, no containers, no infra to manage. I build a lot of small weekend projects that I don't want to maintain once shipped. OpenWorkers gives you the same model when you need compliance or data residency.
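For anyone unfamiliar, the model being praised is roughly this shape. A minimal sketch (the route and response here are made up, but the `fetch`-handler signature is the one Cloudflare Workers use, and which the comment says OpenWorkers replicates):

```javascript
// Minimal sketch of the Workers-style programming model (hypothetical route).
// No container, no server process to manage: just an object with a fetch handler.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/ping") {
      return new Response("pong");
    }
    return new Response("not found", { status: 404 });
  },
};

// On a real runtime you would `export default worker;` rather than invoking it yourself.
```

The appeal is exactly what the comment says: the unit of deployment is a single handler, so there's nothing to patch or babysit after shipping.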

When you have a simple tool you've written for yourself that needs to be reliable and accessible, but that you don't use frequently enough to justify running it on your own server, with all of that setup and ongoing maintenance.

Free bandwidth. (Also the very good sibling-answer about tunnels).

Cloudflare gives me free resources. If they tomorrow reduced my blog to be available on a single region only, I'd shrug and move on with my day.

It takes a minute to set up for the CDN use case.

Price

Honestly, for my own stuff I only need one PoP to be close to my users. And I've avoided using Cloudflare because they're too far away.

More seriously, I think there's a distinction between "edge-style" and actual edge that's important here. Most of the services I've been involved in wouldn't benefit from any kind of edge placement: that's not the lowest hanging fruit for performance improvements. But that doesn't mean that the "workers" model wouldn't fit, and indeed I suspect that using a workers model would help folk architect their stuff in a form that is not only more performant, but also more amenable to edge placement.

> But do you need 300 PoPs to benefit from the edge model? Or would 10 PoPs in your primary territory be enough?

I don't think the number of PoPs is the key factor. The key factor is being able to route requests based on edge-friendly criteria (latency, geographic proximity, etc.) and to automatically deploy changes in a way that the system ensures consistency.
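To make "edge-friendly criteria" concrete, here's a toy sketch. Everything in it is made up for illustration; real platforms do this with anycast and health-aware routing at the network layer, not an application-level lookup table:

```javascript
// Toy sketch of edge-friendly request routing: choose a deployment region
// per request based on geographic proximity. The country→region table and
// region names are entirely hypothetical.
const REGIONS = {
  US: "us-east",
  DE: "eu-central",
  JP: "ap-northeast",
};

const DEFAULT_REGION = "us-east";

function pickRegion(countryCode) {
  // Fall back to a default region when there is no mapping for the client.
  return REGIONS[countryCode] ?? DEFAULT_REGION;
}
```

The point of the comment stands: the hard part isn't having N locations, it's the routing and consistent-deployment machinery around them, which a runtime-only reimplementation doesn't provide.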

This sort of project does not and cannot address those concerns.

Targeting the SDK and interface is a good hackathon exercise, but unless you just want a toy runtime for some local testing, this sort of project completely misses the reason this technology is used in the first place.

I agree that latency is very important, and 300 PoPs is great, but that seems more like marketing; you'd see diminishing returns for the majority of applications.

Many apps are fine on a single server.

Is some sort of decentralised network of hosts somehow working together to challenge the Cloudflare hegemony even plausible? Would it be too difficult to coordinate in a safe and reliable way?

If you have a central database, what benefits are you getting from edge compute? This is a serious question. As far as I understand edge computing is good for reducing latency. If you have to communicate with a non-edge database anyway, is there any advantage from being on the edge?

Databases in Cloudflare are not edge; they are tied to a central location. Where Workers help is with async stateless tasks, and there are a lot of these (authentication, email, notifications, etc.)
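For example, a stateless auth check is the kind of task that fits the edge well: it can reject bad requests before anything touches the central origin or database. A rough sketch (the header scheme and token are made up; a real check would verify a signed token with a timing-safe comparison, not match a hardcoded string):

```javascript
// Sketch of a stateless edge task: reject unauthenticated requests at the
// edge so they never reach the central origin. Token is hardcoded purely
// for illustration.
function isAuthorized(request, expectedToken) {
  const header = request.headers.get("Authorization") ?? "";
  return header === `Bearer ${expectedToken}`;
}

async function handle(request) {
  if (!isAuthorized(request, "s3cret")) {
    return new Response("unauthorized", { status: 401 });
  }
  // Only now would we fetch() the central origin / database.
  return new Response("ok");
}
```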

It has edge replicas, though. You're talking about D1, right?

Well, you can cache stuff and also use read replicas. But yes, you're correct: for writes it doesn't help as much, to say the least. But some (most?) sites are 99.9% reads...
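To illustrate the read-heavy point: with a read-through cache at the edge, the central database only ever sees misses, so a 99.9%-read site pays the cross-region latency roughly once per key. A toy sketch (`loadFromOrigin` is a made-up stand-in for a query against the central region, and a plain `Map` stands in for the platform's cache API):

```javascript
// Toy read-through cache: serve reads from an edge-local cache, fall back
// to the (central) origin only on a miss.
const cache = new Map();

async function readThrough(key, loadFromOrigin) {
  if (cache.has(key)) {
    return cache.get(key); // edge-local hit: no cross-region round trip
  }
  const value = await loadFromOrigin(key); // miss: pay the origin latency once
  cache.set(key, value);
  return value;
}
```

Writes, of course, still have to go to the central region, which is the part the parent comment concedes.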