> If it can't fit on one node, do you really need a distributed queue? (Alternative: good ol' load balancing and REST APIs, maybe with async and retry semantics)

That sounds distributed to me, even if it wires different tech together to make it happen. Is there something about load balancing REST requests to different DB nodes that is less complicated than Kafka?

> Is there something about load balancing REST requests to different DB nodes that is less complicated than Kafka?

To be clear, I wasn't talking about DB nodes; I was talking about skipping an explicit queue altogether.

But let's say you were asking about load balancing REST requests to different backend servers:

Yes, in the sense that a "load-balanced REST microservice with retry logic" is such a common pattern that it's better understood by SWEs and SREs everywhere (a rough sketch of the caller's side is below).

No, in the sense that if you really did just need a distributed queue, then your life would be simpler reusing a battle-tested implementation instead of reinventing that wheel.
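To make the first point concrete, here's a minimal sketch of what "load balancing plus retry semantics" could look like from the caller's side. Everything here is invented for illustration: the endpoint URL, the payload shape, and the backoff parameters; the assumption is that the URL resolves to a load balancer fronting several stateless backend servers.

```python
import time

import requests

# Hypothetical endpoint: assumed to sit behind a load balancer that spreads
# requests across the backend servers.
ENDPOINT = "https://api.example.internal/jobs"


def submit_job(payload: dict, max_attempts: int = 5) -> dict:
    """POST a job to the backend, retrying transient failures with backoff.

    Connection errors, timeouts, and 5xx responses are retried; 4xx responses
    are treated as permanent failures and raised immediately.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            resp = requests.post(ENDPOINT, json=payload, timeout=5)
            if resp.status_code < 500:
                resp.raise_for_status()  # surfaces 4xx as a permanent error
                return resp.json()
        except (requests.ConnectionError, requests.Timeout):
            pass  # transient network failure: fall through to the backoff below
        if attempt < max_attempts:
            time.sleep(2 ** attempt)  # exponential backoff between attempts
    raise RuntimeError(f"job submission failed after {max_attempts} attempts")
```

The usual caveat with retry semantics applies: the endpoint needs to be idempotent (or deduplicate on some request ID), otherwise a retried request can be applied twice, which is the same delivery-semantics question a queue like Kafka makes you answer anyway.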