That's a cute thought, but it's not grounded in reality.

The infra may be cattle, but debugging via anal probe, er, SSH is still the norm.

Technically, if your infrastructure is cattle, there ain't no debugging them. If one gets sick, you destroy it and replace it.

At my organization, we run 1,000-node Kubernetes clusters. Technically I can SSH into the worker nodes, which I verified during setup, but I've never actually done it. When a node reports as unhealthy, Kubernetes kills and replaces it, and I don't particularly care why it went unhealthy.
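For what it's worth, the "detect and replace, don't debug" loop is simple enough to sketch. Here's a rough illustration in Go using client-go; this is not our actual tooling (the control plane and node autoscaler handle it for us), just the shape of the check:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			// A node is healthy when its Ready condition is True; anything
			// else means the kubelet has stopped reporting healthy.
			if cond.Type == corev1.NodeReady && cond.Status != corev1.ConditionTrue {
				fmt.Printf("unhealthy node: %s (%s)\n", node.Name, cond.Reason)
				// A real replacement loop would cordon and drain here, then
				// let the cloud autoscaler recreate the instance. Nobody
				// SSHes in to ask the node how it's feeling.
			}
		}
	}
}
```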

We have a few SSH "bastion" hosts used for proxying into various databases, but those are so locked down that no one besides my team has shell access.

Developers get Grafana for logs and traces, plus whatever other tools they prefer, like AppSignal. If something is exploding in production, that's all they have: no shells, no `kubectl exec`. It was very painful at first, but now our software has great self-reporting built in, which removes the need for local access when troubleshooting.
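By "self-reporting" I mostly mean structured, context-rich logs and traces that land in Grafana. Here's a minimal sketch of the logging side in Go with the standard library's log/slog; the field names are made up for illustration:

```go
package main

import (
	"log/slog"
	"os"
)

func main() {
	// Structured JSON logs are what make "no shell access" workable:
	// every event carries enough context to debug from Grafana alone.
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

	logger.Info("payment processed",
		"order_id", "ord_123", // hypothetical fields, for illustration
		"duration_ms", 42,
		"retries", 0,
	)
	logger.Error("upstream timeout",
		"order_id", "ord_124",
		"upstream", "billing-api",
		"timeout_ms", 5000,
	)
}
```

Once every log line is JSON with request-scoped fields like these, grepping a box over SSH stops being the fastest way to answer "what happened?"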