>In our Docker Compose world, this problem didn’t exist. Services ran where we told them to run.

This is really interesting.

One of the big selling points of Kubernetes is that it takes care of scheduling on its own, spreading replicas across nodes and so on. This is especially useful when you are autoscaling pods.

But when you don't need autoscaling, especially if you have a limited number of microservices, you may as well deploy your applications on the nodes you want them to run on. And running a deploy script against one node or three doesn't really make a difference (even better if you can parallelize, but maybe it's not even necessary).

Yes, you could do the same with a mix of node labels and advanced scheduling configuration, but if pinning workloads to specific nodes is the main (or only) reason you use Kubernetes, and you don't really need autoscaling, Docker Compose or something similar makes sense.
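For context, the "labels + scheduling config" approach in Kubernetes is roughly this: label a node, then add a `nodeSelector` to the workload. A minimal sketch (the node name and `role=frontend` label are made up for illustration):

```yaml
# Assumes you've labeled a node first, e.g.:
#   kubectl label node my-node role=frontend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      nodeSelector:
        role: frontend   # only schedule onto nodes with this label
      containers:
        - name: frontend
          image: nginx
```

So it's doable, but you're re-implementing "run it where I told you" on top of a scheduler whose whole point is to decide that for you.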