I know it's an anti-pattern, but what is the alternative if you need to install some software? Pulling its tagged source code, gcc and compile everything?
Copying from another image is an underappreciated feature:
```dockerfile
FROM ubuntu:24.04
COPY --from=ghcr.io/owner/image:latest /usr/local/bin/somebinary /usr/local/bin/somebinary
CMD ["somebinary"]
```
Not as simple when you need shared dependencies
Run “nix flake update”. Commit the lockfile. Build a docker image from that; the software you need is almost certainly there, and there’s a handy docker helper.
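The "handy docker helper" presumably refers to nixpkgs' `dockerTools`. A minimal sketch of a flake that does this (the `hello` package, image name, and system are placeholders; the actual pin lives in `flake.lock` after `nix flake update`):

```nix
# flake.nix -- sketch; nixpkgs is pinned via flake.lock, so committing the
# lockfile pins every package that ends up in the image
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      packages.x86_64-linux.image = pkgs.dockerTools.buildLayeredImage {
        name = "pinned-image";            # placeholder name
        contents = [ pkgs.hello ];        # placeholder package
        config.Cmd = [ "${pkgs.hello}/bin/hello" ];
      };
    };
}
```

Then something like `nix build .#image && docker load < result` gets it into the local docker daemon.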
Recently I’ve been noticing that Nix software has been falling behind. So “the software you need is almost certainly there” is less true these days. Recently = April 2026.
Are you referring to how the nixpkgs-unstable branch hasn't been updated in the past five days? Or do you have some specific software in mind? (not arguing, just curious)
It’s a variety of different software that just isn’t updated very often.
I don’t mind being somewhat behind, but it seems like there are a lot of packages that don’t get regular updates. It’s okay to have packages that aren’t updated, but those packages should be clearly distinguishable.
oh, great, adding yet another dependency, and one that just had a serious security problem
as if other sandboxing software is perfect
Nothing is perfect. (FreeBSD jails come close but still no.)
Both Debian and Ubuntu provide snapshot mirrors where you can specify a date to get the package lists as they looked at that time.
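For Debian that's snapshot.debian.org; a sketch of pinning a build to it (the date and package are illustrative):

```dockerfile
FROM debian:bookworm
# Point apt at a fixed snapshot date so rebuilds see identical package lists.
# Old snapshots have expired Release files, hence Check-Valid-Until=false.
RUN echo 'deb https://snapshot.debian.org/archive/debian/20240101T000000Z bookworm main' \
      > /etc/apt/sources.list \
 && apt-get -o Acquire::Check-Valid-Until=false update \
 && apt-get install -y --no-install-recommends curl
```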
Which is only useful for historical investigation: the old snapshot has security holes attackers know how to exploit.
> the old snapshot has security holes attackers know how to exploit.
So does running `docker build` when the `RUN apt update` line hits the layer cache; the only difference is that the latter case is silent.
The problem solved by pinning to the snapshot is not to magically be secure, it's knowing what a given image is made of so you can trivially assert which ones are safe and which ones aren't.
In both cases you have to rebuild an image anyway so updating the snapshot is just a step that makes it explicit in code instead of implicit.
Where does the `apt update` connect to? If it's an up-to-date package repo, you get fixes. However, there are lots of reasons it might not be. You'd better know which it is, if this is your plan.
With a binary cache that is not so bad, see for example what nix does.
I don't really see how that's different from a normal binary install of a reproducible package. Especially given the uneven quality of a lot of Nix packages.
If you're in a situation where you want reproducibility you're using nix to build your own packages anyways, not relying on their packages
It's not if you can pin the package. It gives you reproducible docker containers without having to rebuild the world. Wasn't that the entire question?
pretend you don't do it and add your extra software to the layer above
- base image
- software component image

Both should be version pinned for auditing.
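A sketch of that layering in Dockerfile terms, with both images pinned (names and versions are hypothetical; in practice a `@sha256:` digest pins harder than a tag):

```dockerfile
# Layer your extra software on top of a pinned base, pulling the component
# from its own version-pinned image so an audit can see exactly what went in.
FROM ghcr.io/owner/component:1.2.3 AS component
FROM ubuntu:24.04
COPY --from=component /usr/local/bin/somebinary /usr/local/bin/somebinary
CMD ["somebinary"]
```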