At $WORK we’ve been using the Zalando Postgres Kubernetes operator to great success: https://github.com/zalando/postgres-operator
As someone who operated Postgres clusters for over a decade before k8s was even a thing, I fully recommend just using a Postgres operator like this one and moving on. The out-of-the-box config is sane, it’s easy to override things, and failover etc. has been working flawlessly for years. It strikes just the right line between total DIY and the simplicity of a hosted solution. Postgres is solved, next problem.
For something like a database, what is the added advantage of using Kubernetes over something simple like Docker Compose?
In this case the advantage is operators for running Postgres.
With Docker Compose, the abstraction level you're dealing with is containers, which in this case means you're saying "run the postgres image and mount the given config and the given data directory". To run the service, you need to know how to operate the software inside the container.
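Something like this, to make it concrete (a minimal sketch; the tag, password, and paths are placeholders):

    # docker-compose.yml -- you supply the image, config, and data directory yourself
    services:
      postgres:
        image: postgres:15
        environment:
          POSTGRES_PASSWORD: changeme              # required by the official image
        volumes:
          - ./pgdata:/var/lib/postgresql/data      # you own the data directory
          - ./postgresql.conf:/etc/postgresql/postgresql.conf
        command: postgres -c config_file=/etc/postgresql/postgresql.conf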
Kubernetes at its heart is an extensible API server, which allows so-called "operators" to define custom resources and react to them. In this case, that means a Postgres operator defines, for example, a PostgresDatabaseCluster resource, and contains control loops that turn these resources into actual running containers. That way, you don't necessarily need to know how Postgres is configured or that it requires a data directory mount. Instead, you create a resource that says "give me a Postgres 15 database with two instances for HA failover", and the operator then goes to work and manages the underlying containers and volumes.
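With the Zalando operator mentioned upthread, for instance, that resource is a "postgresql" object, and a minimal cluster manifest looks roughly like this (written from memory of the operator docs, so treat the field names as a sketch and check the reference):

    apiVersion: "acid.zalan.do/v1"
    kind: postgresql
    metadata:
      name: acid-minimal-cluster
    spec:
      teamId: "acid"
      numberOfInstances: 2        # primary plus one replica for HA failover
      postgresql:
        version: "15"
      volume:
        size: 10Gi
      users:
        app_owner: []             # roles the operator should create
      databases:
        foo: app_owner            # database name -> owning role

The operator watches for these objects and creates the StatefulSet, services, secrets, and volumes for you.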
Essentially, operators in Kubernetes let you manage these services at a much higher level.
Docker Compose (ignoring Swarm, which seems to be obsolete) manages containers on a single machine. With Kubernetes, the pod that hosts the database is a pod like any other (I assume). It gets moved to a healthy machine when a node goes bad, respects CPU/mem limits, works with generic monitoring tools, can be deployed from GitOps tools, etc. All the k8s goodies apply.
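The CPU/mem limits, for example, are just the usual pod resource spec, which the operator passes through to the database pods (a sketch; where exactly this lives in the spec depends on the operator):

    # sketch: requests/limits applied to the database pods
    spec:
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
        limits:
          cpu: "2"
          memory: 4Gi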
When it comes to a DB, moving the process around is easy; it's the data that matters. The reason bare-metal-hosted DBs are so fast is that they use direct-attached storage instead of networked storage. You lose those speed advantages if you move to distributed storage (Ceph etc.).
You don’t need to use networked storage; the Zalando Postgres operator just uses local storage on the host. It uses a StatefulSet underneath so that pods will stay on the same node until you migrate them.
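Concretely, you can back the volume claims with a plain local StorageClass (this part is stock Kubernetes; the PVs themselves have to exist on the node):

    # local, non-networked storage: no dynamic provisioner, and volume
    # binding waits until the pod has been scheduled onto a node
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-storage
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer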
But if I'm pinning it to dedicated machines, then Kubernetes doesn't give me anything, and I still have to deal with its tradeoffs and moving parts, which in my experience are more likely to bring me down than actual hardware failure.
It’s not like anyone’s recommending you set up k8s just to use Postgres. The advice is that, if you’re already using k8s, the Postgres operator is pretty great, and you should try it instead of using a hosted Postgres offering or having a separate set of dedicated (non-k8s) servers just for Postgres.
I will say that even though the StatefulSet pins the pod to a node, it still has advantages. The StatefulSet can be scaled to N nodes, and if one goes down, failover is automatic. Then, as an admin, you have a choice: either recover the node, or just delete the pod and let the operator recreate it on some other node. When it gets recreated, it resyncs from the new primary, becomes a replica, and you’re back to full health. It’s all pretty easy IMO.
I run PostgreSQL+Patroni on Kubernetes where each instance is a separate StatefulSet pinned to dedicated hosts, with data on local ZFS volumes, provisioned by the OpenEBS controller.
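The pinning itself is just a nodeSelector (or node affinity) on each StatefulSet, something like this (the hostname is a placeholder):

    # sketch: pin one instance's StatefulSet to a dedicated host
    spec:
      template:
        spec:
          nodeSelector:
            kubernetes.io/hostname: db-node-1   # placeholder hostname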
I do this for multiple reasons. One is that I find it easier to use Kubernetes as the backend for Patroni, rather than running/securing/maintaining yet another etcd cluster. But I also do it for observability: it's much nicer to be able to pull all the metrics and logs from all the components. Sure, it's possible to set that up without Kubernetes, but why bother if I can have the logs delivered just one way? Plus, I prefer how self-documenting the whole thing is. No one likes YAML manifests, but they are essentially running documentation that can't get out of sync.
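For the DCS part, Patroni supports storing cluster state directly in Kubernetes objects; the relevant bit of its config looks roughly like this (option names from memory of the Patroni docs, so double-check them):

    # sketch: Patroni using the Kubernetes API as its DCS instead of etcd
    kubernetes:
      namespace: databases          # placeholder namespace
      labels:
        application: patroni
      use_endpoints: true           # keep leader info in an Endpoints object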
The assumption is that you’re already using Kubernetes, sorry.
Docker Compose has always been great for running some containers on a local machine, but I’ve never found it to be great for deployments with lots of physical nodes. k8s is certainly complex, but the complexity really pays off for larger deployments IMO.
I hate that this is starting to sound like a bot Q&A, but the primary advantages are that it provides secure remote configuration and that it's platform agnostic, plus multi-node orchestration, built-in load balancing and a services framework, way more networking control than Docker, better security, self-healing, and the list goes on. You have to read more about it to really understand the advantages over Docker.