Recompressing should be guaranteed deterministic. It’s the packing/unpacking of tar archives to/from directories on disk that leads to the non-determinism (such as timestamps and ownership metadata). If the tar is left intact, both zstd and gzip should produce byte-for-byte identical outputs given the same compression parameters.
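
To make that concrete, here is a minimal sketch in Python (the layer.tar path and levels are placeholders, and as the replies below point out, this only holds with the exact same tool and version on both sides): given identical input bytes and the same settings, the compressed output hashes identically, though the gzip header mtime has to be pinned for that to hold.

```python
import gzip, hashlib, zlib

data = open("layer.tar", "rb").read()  # placeholder path for an intact layer tar

# A raw DEFLATE stream from zlib is deterministic for a given level.
a = zlib.compress(data, 6)
b = zlib.compress(data, 6)
assert hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest()

# The gzip container embeds an mtime in its header, so it must be pinned
# to get repeatable bytes.
c = gzip.compress(data, compresslevel=6, mtime=0)
d = gzip.compress(data, compresslevel=6, mtime=0)
assert hashlib.sha256(c).hexdigest() == hashlib.sha256(d).hexdigest()
```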

That is not correct. You would have to use the same compression tool (and likely version) for this to match.

Old docker discarded the compressed bits but kept some metadata about the layer so it could at least recreate the tar.

It also recreated the manifest on push.

Thanks for the correction. I did mean given the same tooling version/parameters, but (as you and others pointed out) preserving and recreating that state is not at all straightforward.

You are correct; I confused archiving with compression. However, even considering only the compression step, using the same compression parameters cannot be guaranteed, as it is unknown which parameters the image publisher used.
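
A rough illustration of that point (layer.tar and the levels are made up): two gzip runs over the same tar at different levels decompress to identical bytes, but the compressed blobs differ, so the content-addressed layer digest no longer matches.

```python
import gzip, hashlib

data = open("layer.tar", "rb").read()  # placeholder path

original = gzip.compress(data, compresslevel=9, mtime=0)      # publisher's (unknown) settings
recompressed = gzip.compress(data, compresslevel=6, mtime=0)  # our guess at them

# Same content after decompression...
assert gzip.decompress(original) == gzip.decompress(recompressed)

# ...but different compressed bytes, hence different layer digests.
print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(recompressed).hexdigest())
```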

That's true. And regardless of compressed vs regular tar, I think the OCI format working with opaque archives is extremely limiting. I hope the industry will eventually redesign it to use content-addressable storage per file, with metadata to describe the layer/disk layout instead. That would allow per-file deduplication, and we could use tar just for bulk transfer over the wire rather than for the data at rest.
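
Something like this toy sketch is what I have in mind (purely hypothetical, not an existing OCI proposal): file bodies live in a content-addressable store keyed by digest, and the layer becomes metadata mapping paths to digests plus mode/size, so identical files dedupe to a single blob across layers and images.

```python
import hashlib, os, stat

def build_layer_manifest(root, cas_dir):
    """Walk `root`, store each regular file's body once in `cas_dir` by sha256,
    and return a manifest of path -> (digest, mode, size).
    Symlinks, devices, and ownership are omitted for brevity."""
    os.makedirs(cas_dir, exist_ok=True)
    manifest = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            if not stat.S_ISREG(st.st_mode):
                continue
            body = open(path, "rb").read()
            digest = hashlib.sha256(body).hexdigest()
            blob = os.path.join(cas_dir, digest)
            if not os.path.exists(blob):  # dedup: write each unique body once
                with open(blob, "wb") as f:
                    f.write(body)
            manifest[os.path.relpath(path, root)] = {
                "digest": digest,
                "mode": oct(st.st_mode & 0o7777),
                "size": st.st_size,
            }
    return manifest
```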

containerd 2.3 has support for erofs, which does a direct import of the layer. It can even convert tar-based layers to erofs faster than extracting the tar normally.

Also looking at a block-based content store so that blocks can be deduped across images.
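
Roughly along these lines, as a toy illustration rather than the actual containerd design: blobs are split into fixed-size blocks, each block is keyed by its hash, and only unseen blocks are stored, so regions shared across images are kept once.

```python
import hashlib

BLOCK_SIZE = 4096
block_store = {}  # digest -> block bytes (stand-in for an on-disk store)

def store_blob(data: bytes) -> list:
    """Store `data` as a list of block digests, writing only unseen blocks."""
    recipe = []
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)  # dedup across all stored blobs
        recipe.append(digest)
    return recipe

def load_blob(recipe: list) -> bytes:
    """Reassemble a blob from its block digests."""
    return b"".join(block_store[d] for d in recipe)
```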