For package management, it depends on the package manager, but most have some mechanism for installing into a root other than the currently running system.
Even without explicit support in the package manager, you could also roll your own solution by running the package manager in a chroot environment. The chroot would then need to be seeded with the package manager's own dependencies first, of course (and for cross-architecture builds, you'd use user-mode qemu to run pre- and post-installation scripts within the chroot).
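As a rough sketch of that chroot-seeding approach (Debian-flavored assumptions: debootstrap for seeding, an arm64 target, and the qemu-user-static binary path are all illustrative, not from the comment above):

```shell
# Illustrative sketch only -- needs root, and tool/path names are assumptions.
mkdir -p rootfs
# 1. Seed the root with the package manager's own dependencies
#    (debootstrap for Debian-family; other distros have equivalents).
debootstrap --arch=arm64 --foreign stable rootfs
# 2. Copy a statically linked user-mode qemu into the chroot so foreign
#    binaries can run on the build host (binfmt_misc dispatches to it).
cp /usr/bin/qemu-aarch64-static rootfs/usr/bin/
# 3. Finish the install; pre/post scripts now run under qemu emulation.
chroot rootfs /debootstrap/debootstrap --second-stage
```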
Whether this yields a minimal container when pointed at a repository intended for deploying a full OS is another question, but using a package manager to build a root filesystem offline isn't hard to pull off.
As for how to do this in the context of building an OCI container, tools like Buildah[1] exist to support container workflows beyond the conventional Dockerfile approach, providing straightforward command-line tools to create containers, work with layers, mount and unmount container filesystems, etc.
There have got to be a million ways to do this by now. Some of the more principled approaches are tools like Nix (https://xeiaso.net/talks/2024/nix-docker-build/) and Bazel (https://github.com/bazel-contrib/rules_oci). But if you want to use an existing package manager like apt, you can pick it apart. Apt calls dpkg, and dpkg extracts files and runs post-install scripts. Only the post-install script needs to run inside the container.
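A hedged sketch of that "pick it apart" idea (the package filename and rootfs path are made-up examples, not from the comment):

```shell
# Extract the payload of a .deb directly into the container root,
# on the host, without running dpkg inside the container:
dpkg-deb -x somepackage.deb rootfs/
# Extract the maintainer scripts (preinst/postinst/...) separately:
dpkg-deb -e somepackage.deb rootfs/tmp/somepackage-control
# Only this step needs to run *inside* the container/chroot:
chroot rootfs /tmp/somepackage-control/postinst configure
```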
I may be a little out of touch here, because the last time I did this, we used a wholly custom package manager.
With Red Hat's UBI Micro:
(from https://www.redhat.com/en/blog/introduction-ubi-micro published in 2021)
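Roughly, the workflow that post describes is to mount a ubi-micro container with Buildah and install into it with the *host's* package manager (image name, release version, and the httpd example here are assumptions from memory, not quoted from the post):

```shell
# Illustrative sketch only -- needs root and a RHEL-family host.
ctr=$(buildah from registry.access.redhat.com/ubi9/ubi-micro)
mnt=$(buildah mount "$ctr")
# Install into the mounted container root, not the running system:
dnf install --installroot "$mnt" --releasever 9 \
    --setopt install_weak_deps=false -y httpd
dnf clean all --installroot "$mnt"
buildah umount "$ctr"
buildah commit "$ctr" ubi-micro-httpd
```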
[1] https://github.com/containers/buildah/blob/main/README.md
apk and xbps can do this. You specify a different root to work in.
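For example (the repository URLs and package sets here are illustrative assumptions):

```shell
# Alpine: initialize and populate a fresh root with apk.
apk add --root ./rootfs --initdb \
    --repository https://dl-cdn.alpinelinux.org/alpine/v3.20/main \
    alpine-base
# Void: same idea with xbps-install.
xbps-install --rootdir ./rootfs \
    --repository https://repo-default.voidlinux.org/current \
    -S base-system
```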
Most Makefiles allow you to specify an alternate DESTDIR on install.
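A minimal sketch of DESTDIR staging (the `hello` program, the prefix, and the `/tmp/destdir-demo` staging path are made up for illustration):

```shell
mkdir -p /tmp/destdir-demo && cd /tmp/destdir-demo
printf '#!/bin/sh\necho hello\n' > hello

# A typical install target prefixes every path with $(DESTDIR):
printf 'PREFIX ?= /usr/local\ninstall:\n\tinstall -D -m 0755 hello $(DESTDIR)$(PREFIX)/bin/hello\n' > Makefile

# Stage into a scratch rootfs instead of the live system:
make install DESTDIR=/tmp/destdir-demo/rootfs
```

The staged tree under `rootfs/` can then be tarred up or handed to a container build as-is.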