So if I have a Docker container that needs a handful of packages, how would you handle it?

I handle it by using a slim Debian or Ubuntu image, then using apt to install those packages with the necessary dependencies.
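A minimal sketch of that approach (the package names are just examples, not anything specific to my setup):

```dockerfile
# Start from a slim base image.
FROM debian:bookworm-slim

# Install only what's needed, skip recommended packages, and clean the
# apt lists in the same layer so they don't bloat the image.
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
        ca-certificates \
        curl \
 && rm -rf /var/lib/apt/lists/*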

For anything easy, like a single basic binary, I use the most minimal image, but as soon as it gets even a little annoying to set up and maintain, I switch to apt and a nightly build of the image.

The same way you may require something like cmake as a build dependency without it being part of the resulting binary: separate build-time and runtime dependencies so you only distribute the relevant ones.
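That split maps naturally onto a multi-stage build. A sketch, assuming a CMake-based C/C++ project (the source layout and binary name `myapp` are hypothetical):

```dockerfile
# --- Build stage: has cmake, compilers, headers ---
FROM debian:bookworm-slim AS build
RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential cmake \
 && rm -rf /var/lib/apt/lists/*
COPY . /src
RUN cmake -S /src -B /build && cmake --build /build

# --- Runtime stage: only the binary and its runtime deps ---
FROM debian:bookworm-slim
COPY --from=build /build/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

The final image never contains cmake or the compilers; they live only in the discarded build stage.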

IMO: package manager outside the container. You just want the packages inside the container; the manager can sit outside and install the packages into it.
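One way to sketch that, assuming a Fedora/RHEL-style host where dnf supports `--installroot` (the package names and image tag are illustrative):

```shell
# Install packages into a bare directory tree on the host;
# the package manager itself never enters the image.
sudo dnf install -y --installroot=/tmp/rootfs --releasever=9 \
    coreutils curl

# Turn the populated tree into an image with no dnf/rpm tooling inside.
sudo tar -C /tmp/rootfs -c . | docker import - minimal-curl:latest
```

Tools like Buildah and apko follow the same idea: the build host resolves and installs packages, and only the resulting filesystem becomes the image.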

Your question feels insane to me for production environments. Why aren't you pinning your package versions and either pulling them from some network/local cache or baking them into your images?

That local cache is often implemented as a drop-in replacement for the upstream package repository, and packages are still installed with the same package manager (yum, apt, pip, npm).
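Combined with version pinning, that looks something like this in an apt-based image (the mirror URL and the exact version string are hypothetical placeholders):

```dockerfile
FROM debian:bookworm-slim

# Point apt at the internal mirror instead of the public repos.
# Newer Debian images keep the stock list in /etc/apt/sources.list.d/,
# so clear that out too.
RUN rm -f /etc/apt/sources.list.d/* \
 && echo "deb http://apt-mirror.internal.example/debian bookworm main" \
        > /etc/apt/sources.list

# Pin exact versions so builds are reproducible until you bump them.
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
        curl=7.88.1-10+deb12u5 \
 && rm -rf /var/lib/apt/lists/*
```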

Don't the aforementioned security vulnerabilities strike you as a potential reason?

Friend, considering the supply-chain attacks going on these days, automatically updating everything immediately probably isn't the perfect move either.

You need to automatically update from a trusted source, and that source had better audit and update constantly. Which is hard.

Ignoring the real benefits of security updates to guard against the unlikely event of a supply-chain attack sounds like a weird tradeoff.