Lots of solutions appear to work in a steady-state scenario, which, admittedly, is most of the time. The key question is how resilient to failure they are: not just under blackouts (total outages) but under brownouts (partial degradation, where components are slow or shedding load rather than fully down) as well.
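As an illustrative sketch (not from the original comment; the endpoint, budget, and function names are hypothetical), the snippet below shows one way the two failure modes surface differently at a client: a blackout shows up as an immediate transport error, while a brownout only becomes visible if you enforce a latency budget; otherwise a slow-but-alive dependency quietly stalls the caller.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// fetchWithDeadline treats "too slow" the same as "down": both surface as
// errors instead of letting a degraded dependency silently stall the caller.
func fetchWithDeadline(parent context.Context, url string, budget time.Duration) error {
	ctx, cancel := context.WithTimeout(parent, budget)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		if ctx.Err() == context.DeadlineExceeded {
			// Brownout symptom: the dependency did not answer within the
			// latency budget, even though it may still be "up".
			return fmt.Errorf("brownout: exceeded %v budget: %w", budget, err)
		}
		// Blackout symptom: connection refused, reset, DNS failure, etc.
		return fmt.Errorf("blackout or transport failure: %w", err)
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	// Hypothetical local endpoint and budget, purely for illustration.
	err := fetchWithDeadline(context.Background(), "http://127.0.0.1:8080/health", 200*time.Millisecond)
	fmt.Println("result:", err)
}
```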
Many people will read a comment like this and cargo-cult an implementation (“millions of workloads”, you say?!) without knowing how they will handle the many different failure modes that can result, or even at what scale the solution breaks down. Then, when the inevitable happens, panic and potentially data loss ensue. Or the system eventually hits scaling limits that can only be overcome with a significant architectural overhaul.
TL;DR: There isn’t a one-size-fits-all solution for most distributed consensus problems, especially ones that require global consistency and fault tolerance and, on top of that, impose strict upper bounds on information propagation latency.
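As a rough back-of-the-envelope sketch of why a propagation-latency bound is so constraining (the round-trip figures below are made up for illustration): in a majority-quorum protocol such as Raft or Multi-Paxos, the leader cannot commit until a majority of replicas acknowledge, so the best-case commit latency is the round trip to the farthest replica in the fastest majority.

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	// Hypothetical round-trip times (ms) from the leader to each follower.
	followerRTTs := []float64{2, 35, 70, 140}
	sort.Float64s(followerRTTs)

	n := len(followerRTTs) + 1 // total replicas, counting the leader
	majority := n/2 + 1        // acks needed to commit; the leader counts itself
	acksFromFollowers := majority - 1

	// Best case: the commit completes when the slowest follower in the
	// fastest majority has acknowledged.
	commitLatency := followerRTTs[acksFromFollowers-1]

	fmt.Printf("replicas=%d, majority=%d, best-case commit latency ≈ %.0f ms\n",
		n, majority, commitLatency)
	// If the workload's propagation bound is, say, 50 ms end to end, this
	// placement barely meets it; move one replica further away (or lose a
	// nearby one) and it no longer can. Constraints like this are why there
	// is no one-size-fits-all answer.
}
```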