I'm optimistic we will succeed in efforts to simplify Linux application/dependency compatibility instead of relying on abstractions that work around it.

Maybe if you only look at it through the lens of building an app/service, but containers offer so much more than that. By standardizing their delivery through registries and management through runtimes, a lot of operational headaches just go away when using a container orchestrator. Not to mention better utilization of hardware since containers are more lightweight than VMs.

> Not to mention better utilization of hardware

When compared to a VM, yes. But shipping a separate userspace for each small app is still bloat. You can reuse software packages and runtime environments across apps. From an I/O, storage, and memory utilization point of view, it feels baffling to me that containers are so popular.

"bloat" has always been the last resort criticism from someone who has nothing valid. Containers are incredibly light, start very rapidly, and have such low overhead in general that the entire industry has been using them.

Docker containers also do reuse shared components: layers that are shared between images are not redownloaded. Typically the only unique layers, at the top of the stack, are basically just the app you want to run.
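To illustrate the layer-sharing point, a sketch of two hypothetical services built from the same base (image and file names are made up):

```dockerfile
# service-a/Dockerfile
FROM python:3.12-slim        # base layers: stored and downloaded once per host
RUN pip install flask        # layer unique to service A
COPY app.py /app/app.py      # app layer, also unique

# service-b/Dockerfile
FROM python:3.12-slim        # identical base layers, reused from the local cache
RUN pip install requests     # layer unique to service B
COPY worker.py /app/worker.py
```

Because both images reference the same `python:3.12-slim` layers by content hash, a host running both only stores and pulls those base layers once.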

> From an I/O, storage, and memory utilization point of view, it feels baffling to me that containers are so popular.

Why? It's not virtualization, it's containerization. It's using the host kernel.

Containers are fast.

I was referring to the userspace runtime stack, not the kernel. What I criticize is that multiple containers sharing a single host usually overdo the filesystem isolation. Hundreds of MBs of libraries and tools are needlessly duplicated, even though they could just as well have used distro packages and deployed their apps as system-level packages and systemd unit files with `DynamicUser=`.
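For the approach described above, a minimal sketch of what such a unit might look like (the app name and paths are hypothetical):

```ini
# /etc/systemd/system/myapp.service -- hypothetical app, shared distro libraries
[Unit]
Description=My app, linked against system packages
After=network.target

[Service]
ExecStart=/usr/bin/myapp --port 8080
DynamicUser=yes        # systemd allocates a transient UID/GID at start
StateDirectory=myapp   # writable /var/lib/myapp owned by the dynamic user
ProtectSystem=strict   # filesystem is read-only outside the state directory

[Install]
WantedBy=multi-user.target
```

With `DynamicUser=`, there's no user account to pre-create, and the sandboxing directives give some of the isolation containers provide without duplicating the userspace.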

You can hardly call this efficient hardware utilization.

The duplication is a necessity to achieve the isolation. Having shared dependencies and hordes of unit files for a multi-tenant system is hell: versioning conflicts can and will break this paradigm, and no serious shop is doing it.

For running your own machine, sure. But this becomes unmaintainable for a sufficiently multi-tenant system. Nix is the only thing that can really begin to solve this outside of container orchestration.

Hah, indeed, that's my perspective. I'm used to being able to compile a program, distribute the executable, and have it "just work" across Windows, Linux, and macOS (with the appropriate compile targets set).

Agreed.

I've recently switched from docker compose to process compose and it's super nice not to have to map ports or mount volumes. What I actually needed from docker had to do less with containers and more with images, and nix solves that problem better without getting in the way at runtime.

Assuming I've found the right process-compose [1], it struck me as having much overlap with the features of systemd. Or at least, I would tend to reach for systemd if I wanted something to run arbitrary processes. Is there something additional/better that process-compose does for you?

[1]: https://github.com/F1bonacc1/process-compose

That's the one, although I tend to reference it through https://github.com/juspay/services-flake because that way I end up using the community-maintained configs for whatever well-known services I've enabled (I'll use postgres as an example below, but there are many: https://community.flake.parts/services-flake/services)

What process-compose gives me is a single parent with all of that project's processes as children, and a nice TUI/CLI for scrolling through them to see who is happy/unhappy and interrogating their logs, and when I shut it down all of that project's dependencies shut down. Pretty much the same flow as docker-compose.

It's all self-contained, so I can run it on macOS and it'll behave just the same as on Linux (I don't think systemd does this, but I could be wrong), and without requiring me to solve the docker/podman/rancher/orbstack problem (those are dependencies that are hard to bundle in nix, so while everything else comes for free, they come at the cost of complicating my readme with a bunch of requests that the user set things up beforehand).

As a bonus, since it's a single parent process, if I decide to invoke it through libfaketime, the faked time is inherited by every subprocess, so it's consistent in the database, the services, and the observability tools...
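For the curious, that libfaketime trick looks roughly like this (`faketime` is libfaketime's wrapper; the library path in the second form is distro-dependent):

```shell
# Start everything under a faked clock; children inherit the preload,
# so the database, services, and observability tools agree on the time.
faketime '2030-01-01 00:00:00' process-compose up

# Roughly equivalent without the wrapper (library path varies by distro):
LD_PRELOAD=/usr/lib/faketime/libfaketime.so.1 \
FAKETIME='2030-01-01 00:00:00' \
process-compose up
```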

My feeling for systemd is that it's more for system-level stuff and less for project-level dependencies. Like, if I have separate projects which need different versions of postgres, systemd commands aren't going to give me a natural way to keep track of which project's postgres I'm talking about. process-compose, however, will show me logs for the correct postgres (or whatever service) in these cases:

    ~/src/projA$ process-compose process logs postgres
    ~/src/projB$ process-compose process logs postgres

This is especially helpful because AI agents tend to be scoped to a working directory. So if I have one instance of Claude Code on each monitor, in each directory, whichever one tries to look at postgres logs will end up looking at the correct postgres's logs without even having to know that there are separate ones running.

Basically, I'm allergic to configuring my system at all. All dependencies besides nix, my text editor, and my shell are project-level dependencies. This makes it easy to hop between machines and not really care about how they're set up. Even on production systems, I'd rather just clone the repo and `nix run` in that dir (it then launches process-compose, which makes everything just like it was in my dev environment). I am, however, not in charge of any production systems, so perhaps I'm a bit out of touch there.

I'm curious why. To me "We updated our library to change some things in a way that's an improvement on net but only mostly backwards compatible" seems like an extremely common instinct in software development. But in an environment where people are doing that all the time, the only way to reliably deploy software is to completely freeze all your direct and indirect dependencies at an exact version. And Docker is way better at handling that than traditional Linux package managers are.
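To make the freezing point concrete, a sketch of how Docker pins everything exactly (the digest and file names below are placeholders, not real values):

```dockerfile
# Pin the base image by content digest, not just by tag, so a tag update
# upstream can never change what gets deployed (digest is a placeholder):
FROM python:3.12-slim@sha256:<digest-from-docker-image-inspect>

# Pin direct and transitive dependencies to exact versions,
# e.g. with a lockfile generated by `pip freeze`:
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
```

A traditional distro package manager, by contrast, resolves dependencies against whatever the system repositories currently hold, which is exactly the drift the parent describes.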

Why do you think other tools will make a comeback?

You can write any software you want without worrying about depending on a specific set of system dependencies. I like software that "just works", and making something that will give you inscrutable linking or dependency errors if the OS isn't set up just so is a practice I think should go away.

I am also optimistic we will succeed in efforts to properly annotate the data on the Internet with useful and accurate meta-data and achieve the semantic web vision instead of relying on search engines and LLMs.
