I've always wondered at what scale gossip/SWIM breaks down and you need a hierarchy or partitioning. Fly's use of Corrosion seems to imply it's good enough for a single region, which is pretty surprising because IIRC Uber's Ringpop was said to face problems at around 3K nodes.
it would be super cool to learn more about how the world's largest gossip systems work :)
SWIM is probably going to scale pretty much indefinitely. The issue we have with a single global SWIM broadcast domain isn't that the scale is breaking down; it's just that the blast radius for bugs (both in Corrosion itself, and in the services that depend on Corrosion) is too big.
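Part of why SWIM scales so well is that its per-node failure-detection load is constant: each protocol period a member direct-pings one random peer and, only on timeout, asks k peers to indirect-ping on its behalf. A toy sketch of one round (not memberlist's actual implementation; the `ping` callback and node names are made up):

```python
import random

def probe_round(self_id, members, ping, k=3):
    # One SWIM failure-detection round. Per-period cost is constant
    # (1 direct ping + at most k indirect pings) no matter how large
    # `members` grows -- the load does not scale with cluster size.
    target = random.choice([m for m in members if m != self_id])
    if ping(self_id, target):
        return (target, "alive")
    # Direct ping failed: ask k other members to ping the target for us.
    helpers = random.sample(
        [m for m in members if m not in (self_id, target)], k)
    if any(ping(h, target) for h in helpers):
        return (target, "alive")
    return (target, "suspect")  # suspicion is gossiped, not an instant kill

# Toy usage: node "c" is unreachable, everything else responds.
members = ["a", "b", "c", "d", "e", "f"]
ping = lambda src, dst: dst != "c"
```

Real implementations layer suspicion timeouts and refutation on top of this, but the constant per-node cost is the scaling story.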
We're actually keeping the global Corrosion cluster! We're just stripping most of the data out of it.
Back-of-napkin math I've done previously: it breaks down around 2 million members with HashiCorp's defaults. The defaults are quite aggressive, though, and if you can tolerate seconds of latency (called out in the article) you could reach billions without a lot of trouble.
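The napkin math here follows from push gossip: with fanout f, the infected set grows by roughly (f+1)x per interval, so full dissemination takes about log base (f+1) of N rounds. A rough estimate, assuming memberlist-style LAN defaults of a 200 ms gossip interval and fanout 3 (check your actual config; these are assumptions, not a measurement):

```python
import math

def gossip_latency(n_members, fanout=3, interval_ms=200):
    # Each interval, every infected member gossips to `fanout` peers,
    # so the infected set multiplies by ~(fanout + 1) per round.
    rounds = math.ceil(math.log(n_members, fanout + 1))
    return rounds, rounds * interval_ms / 1000.0

# 2M members with the assumed defaults: ~11 rounds, ~2.2 s.
rounds, secs = gossip_latency(2_000_000)
# 1B members: ~15 rounds, ~3 s -- latency grows logarithmically,
# which is why "seconds of tolerance" buys you enormous clusters.
```

The practical breakdown at millions of members comes less from this dissemination curve and more from per-node state (full membership lists, churn processing), which grows linearly with N.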
It's also about the frequency of changes and granularity of state when sizing workloads. My understanding is that most Hashi shops would federate workloads of our size/global distribution; it would be weird to try to run one big cluster to capture everything.
From a conversation I'm literally having right now: "try to run one big cluster to capture everything" is our active state. I've brought up federation a bunch of times and it's fallen on deaf ears. :)
We are probably past the size of the entirety of fly.io, for reference, and maintenance is very painful. It works because we are doing really strange things with Consul (batch txn cross-cluster updates of static entries) on really, really big servers (4Gbps+ filesystems, 1TB of memory, hundreds of big and fast cores, etc.).
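For anyone unfamiliar with the batch-txn approach: Consul's `/v1/txn` endpoint accepts up to 64 operations per request by default, with KV values base64-encoded. A minimal sketch of building such a payload (key names are made up for illustration; this only constructs the JSON body, it doesn't talk to an agent):

```python
import base64
import json

def kv_set_ops(entries):
    # Build a Consul /v1/txn payload: one "set" verb per key.
    # The transaction API requires values to be base64-encoded.
    return [{"KV": {"Verb": "set",
                    "Key": key,
                    "Value": base64.b64encode(val.encode()).decode()}}
            for key, val in entries.items()]

# Hypothetical static entries; PUT the JSON body to
# http://<agent>:8500/v1/txn (max 64 ops per txn by default).
ops = kv_set_ops({"static/service-a": "10.0.0.1", "static/service-b": "10.0.0.2"})
body = json.dumps(ops)
```

Batching like this turns what would be thousands of individual KV writes into a much smaller number of atomic requests, which matters a lot at the scale described above.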
Who orchestrates the orchestrators? is the question we’ve never answered at HashiCorp. We tried expanding Consul’s variety of tenancy features, but if anything it made the blast radius problem worse! Nomad has always kept its federation lightweight which is nice for avoiding correlated failures… but we also never built much cluster management into federated APIs. So handling cluster sprawl is an exercise left to the operator. “Just rub some terraform on it” would be more compelling if our own products were easier to deploy with terraform! Ah well, we’ll keep chipping away at it.