All Docker containers should have been like that. apt-get update in a Docker build step is an anti-pattern.

You are screwed either way. If you don't update, your container has a ton of known security issues; if you do, the container is not reproducible. Reproducibility is neat, with some useful security benefits, but it becomes a non-goal if the container is more than a month old - a day might even be a better max age.

Why is there a need for a package manager inside a container at all? Aren't they supposed to be minimal?

Build your container/vm image elsewhere and deploy updates as entirely new images or snapshots or whatever you want.

Personally I prefer buildroot and consider VMs just another target for embedded OS images.

So if I have a docker container which needs a handful of packages, how would you handle it?

I'm handling it by using a slim Debian or Ubuntu base, then using apt to install those packages with the necessary dependencies.

For everything easy, like one basic binary, I use the most minimal image. But as soon as it gets just a little bit annoying to set up and keep maintained, I start using apt and a nightly build of the image.
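A minimal sketch of that slim-base-plus-apt pattern; the package names and version strings here are placeholders, not a real manifest:

```dockerfile
# Hypothetical example of the slim-base-plus-apt approach.
FROM debian:bookworm-slim

# Pinning versions makes the build at least declared, if not fully
# reproducible. These package names/versions are placeholders.
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      ca-certificates \
      curl \
 && rm -rf /var/lib/apt/lists/*
```

Clearing the apt lists in the same RUN keeps the layer small; `--no-install-recommends` keeps the dependency set closer to minimal.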

I update my Docker containers regularly, but I do it in a reproducible, auditable, predictable way.

Could you explain how you achieve this?

If you are on github/gitlab, renovate bot is a good option for automating dependency updates via PRs while still maintaining pinned versions in your source.

Chainguard, Docker Inc’s DHI etc. There’s a whole industry for this.

I know it's an anti-pattern, but what is the alternative if you need to install some software? Pulling its tagged source code plus gcc and compiling everything?

Copying from another image is an under-appreciated feature:

FROM ubuntu:24.04

COPY --from=ghcr.io/owner/image:latest /usr/local/bin/somebinary /usr/local/bin/somebinary

CMD ["somebinary"]

Not as simple when you need shared dependencies

Run “nix flake update”. Commit the lockfile. Build a docker image from that; the software you need is almost certainly there, and there’s a handy docker helper.

Recently I’ve been noticing that Nix software has been falling behind. So “the software you need is almost certainly there” is less true these days. Recently = April 2026.

Are you referring to how the nixpkgs-unstable branch hasn't been updated in the past five days? Or do you have some specific software in mind? (not arguing, just curious)

It’s a variety of different software that just isn’t updated very often.

I don’t mind being somewhat behind, but it seems like there are a lot of packages that don’t get regular updates. It’s okay to have packages that aren’t updated, but those packages should be clearly distinguishable.

oh, great, adding another dependency, and one that just had a serious security problem

as if other sandboxing software is perfect

Nothing is perfect. (FreeBSD jails come close but still no.)

Both Debian and Ubuntu provide snapshot mirrors where you can specify a date to get the package lists as they looked at that time.
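A hedged sketch of what that looks like in a Dockerfile, using Debian's snapshot.debian.org service; the timestamp and the package name (`some-package`) are placeholder examples:

```dockerfile
FROM debian:bookworm-slim

# Replace the default sources with a fixed archive snapshot
# (the date here is an example). Release files in old snapshots
# have expired, hence Check-Valid-Until=false.
RUN rm -f /etc/apt/sources.list.d/* \
 && echo 'deb http://snapshot.debian.org/archive/debian/20240101T000000Z bookworm main' \
      > /etc/apt/sources.list \
 && apt-get -o Acquire::Check-Valid-Until=false update \
 && apt-get install -y --no-install-recommends some-package \
 && rm -rf /var/lib/apt/lists/*
```

Rebuilding this Dockerfile later yields the same package lists, because the mirror contents are frozen at the given timestamp.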

Which is only useful for historical investigation - the old snapshot has security holes attackers know how to exploit.

> the old snapshot has security holes attackers know how to exploit.

So does running `docker build` when the `RUN apt update` line hits the cache - except the latter is silent.

The problem solved by pinning to the snapshot is not to magically be secure, it's knowing what a given image is made of so you can trivially assert which ones are safe and which ones aren't.

In both cases you have to rebuild an image anyway so updating the snapshot is just a step that makes it explicit in code instead of implicit.

Where does the apt update connect to? If it is an up-to-date package repo, you get fixes. However, there are lots of reasons it might not be. You'd better know, if this is your plan.

With a binary cache that is not so bad, see for example what nix does.

I don't really see how that's different from a normal binary install of a reproducible package. Especially with the lacking quality of a lot of Nix packages.

If you're in a situation where you want reproducibility you're using nix to build your own packages anyways, not relying on their packages

It's not if you can pin the package. It gives you reproducible docker containers without having to rebuild the world. Wasn't that the entire question?

pretend you don't do it and add your extra software to the layer above

base image

software component image

both should be version-pinned for auditing
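A sketch of that two-layer layout with both images pinned by digest; the registry path and sha256 digests are placeholders:

```dockerfile
# Hypothetical digest-pinned base image (digest is a placeholder).
FROM ubuntu@sha256:0000000000000000000000000000000000000000000000000000000000000000

# Software component layer, also pinned by digest, copied in on top.
COPY --from=registry.example.com/team/component@sha256:1111111111111111111111111111111111111111111111111111111111111111 \
     /opt/app /opt/app

CMD ["/opt/app/run"]
```

Digest pins (rather than tags) make both layers auditable: the image is fully determined by the two digests recorded in source control.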

I disagree with that as a hard rule and with the opinion that it's an anti-pattern. Reproducible containers are fine, but not always necessary. There's enough times when I do want to run apt-get in a container and don't care about reproducibility.

It's to solve exactly these issues that I'm using StableBuild.

It is a managed service that keeps a cached copy of your dependencies from a specific point in time. You can pin your dependencies within a Dockerfile and get reproducible Docker images.

I don't wanna be that guy but...

NIX FIXES THIS.

So does Bazel. :p

adding to the list, one exotic approach to this problem is stagex https://codeberg.org/stagex/stagex

This has been a solved problem for over two decades now with Nix, but people can't be arsed.

It has been solved even without Nix for a long time; laziness is probably why we are not doing it.