This was always my issue with pip and venv: I don’t want a thing that hijacks my terminal and PATH, flips my world upside down and makes writing automated headless scripts and systemd services a huge pain.

When I drop into a Node.js project, usually some things have changed, but I always know that if I need to, I can find all of my dependencies in my node_modules folder, and I can package up that folder and move it wherever I need to without breaking anything, needing to reset my PATH or needing to call `source` inside a Dockerfile (oh lord). Many people complain about Node and npm, but as someone who works on a million things, Node/npm is never something I need to think about.

Python/pip though… Every time I need to containerize or set up a Python project for some arbitrary task, there’s always an issue with “Your Linux distro doesn’t support that version of Python anymore”, forcing me to use a newer version than the project wants and triggering an avalanche of new “you really shouldn’t install packages globally” messages, demanding new --yes-destroy-my-computer-dangerously-and-step-on-my-face-daddy flags and crashing my automated scripts from last year.

And then there’s Conda, which has all of these problems and is also closed source (I think?) and has a EULA, which makes it an even bigger pain to automate cleanly. (And yes, I know about mamba and miniconda, but the default tool everyone uses should be the one that’s easy to work with.)

And yes, I know that if I was a full-time Python dev there’s a “better way” that I’d know about. But I think a desirable quality for languages/ecosystems is the ability for an outsider to drop in with general Linux/Docker knowledge and be able to package things up in a sometimes unusual way. And until uv, Python absolutely failed in this regard.

Having a directory like node_modules containing the dependencies is such an obviously good choice that it's sad how the Python steering council actively resists it with what I find to be odd arguments.

I think a lot of the decades-old farce of Python package management would have been solved by this.

https://peps.python.org/pep-0582/

https://discuss.python.org/t/pep-582-python-local-packages-d...

It's literally what a venv does, and it is very widespread to just make a venv per project, just as you create a node_modules per project.
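
For illustration, the per-project pattern is only a few commands (a sketch; the `.venv` name, `requirements.txt`, and `main.py` are just placeholders):

  cd my-project
  python3 -m venv .venv                      # the project-local "node_modules" equivalent
  .venv/bin/pip install -r requirements.txt  # installs only into ./.venv
  .venv/bin/python main.py                   # runs against exactly those packages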

This is not a good idea: this leads to longer build times and/or invalid builds (you build against different dependencies than declared in config).

Having a dependency cache and a build tool that knows where to look for it is a much superior solution.

(p)npm manages both fine with the dependency directory structure.

This is literally not possible.

If you have a local dependency repo and a dependency manifest, then during the build you can either:

1. Check whether the local repo is in sync - a correct build, but it takes more time

2. Skip the check - a risky build, but fast

If the dependencies are only in the cache directory, you can have both - correct and fast builds.

I don't follow. In pnpm there's a global cache at ~/.pnpm with versioned packages, and node_modules has symlinks to those. Dependencies are defined in package.json; transitive dependencies are versioned and SHA512-hashed in pnpm-lock.yaml.

E.g.

  $ ls -l ./node_modules/better-sqlite3
  ... node_modules/better-sqlite3 -> .pnpm/better-sqlite3@12.4.1/node_modules/better-sqlite3

You still need to have those symlinks checked. For example, if you switch to a branch with an updated package.json, you either need to check the symlinks or you risk an incorrect build.

Introducing a directory that needs to stay in sync with the dependency manifest will always lead to such problems. It is good that Python developers do not want to repeat that mistake.

Just run `pnpm install` after switching the branch. Or add `pnpm install` to the build step; many build tools will do that automatically. If the deps are in sync with the manifest, that typically takes less than a second.
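
For example (a sketch; assumes the project defines a `build` script):

  git switch feature-branch
  pnpm install --frozen-lockfile   # near-instant no-op if node_modules already matches the lockfile
  pnpm run build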

This is a problem I've never encountered in practice. And it's not like you don't have to update the dependencies in Python if they differ per branch.

> This was always my issue with pip and venv: I don’t want a thing that hijacks my terminal and PATH ...

What's the "this" that is supposedly always your issue? Your comment is phrased as if you're agreeing with the parent comment but I think you actually have totally different requirements.

The parent comment wants a way to have Python packages on their computer that persist across projects, or don't even have a notion of projects. venv is ideal for that. You can make some "main" venv in your user directory, or a few different venvs (e.g. one for deep learning, one for GUIs, etc.), or however you like to organise it. Before making or running a script, you can activate whichever one you prefer and do exactly what the parent commenter requested - make use of already-installed packages, or install new ones (just pip install) and have them persist for other work. You can even switch back and forth between your venvs for the same script. Totally slapdash, because there's no formal record of which scripts need which packages, but also no ceremony to writing new code.
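
For instance (a sketch of that slapdash workflow; the venv names, package, and script are arbitrary):

  python3 -m venv ~/venvs/deep-learning    # one "main" venv per broad area of work
  python3 -m venv ~/venvs/gui
  . ~/venvs/deep-learning/bin/activate
  pip install numpy                        # persists in that venv for any future script
  python my_experiment.py
  deactivate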

Whereas your requirements seem to be very project-based - that sounds to me like exactly the opposite point of view. Maybe I misunderstood you?

    > Python/pip though… Every time I need to containerize or setup a Python project for some arbitrary task, there’s always an issue with “Your Linux distro doesn’t support that version of Python anymore” [...]
How are you containerizing Python projects? What confuses me about your statement are the following things:

(1) How old must the Python version of those projects be to no longer be supported by any decent GNU/Linux distribution?

(2) Are you not using official Python docker images?

(3) What's pip gotta do with a Python version being supported?

(4) How does that "Your Linux distro doesn’t support that version of Python anymore" show itself? Is that a literal error message you are seeing?

    > [...] demanding new --yes-destroy-my-computer-dangerously-and-step-on-my-face-daddy flags and crashing my automated scripts from last year
It seems you are talking about installing things in system Python, which you shouldn't do. More questions:

(1) Why are you not using virtual environments?

(2) You are claiming Node.js projects to be better in this regard, but actually they are just creating a `node_modules` folder. Why then is it a problem for you to create a virtual environment folder? Is it merely that one is automatic and the other isn't?

    > This was always my issue with pip and venv: I don’t want a thing that hijacks my terminal and PATH, flips my world upside down and makes writing automated headless scripts and systemd services a huge pain.
It is very easy to activate a venv just for one command. Use a subshell, where you `. venv/bin/activate && python ...(your program invocation here)...`. Aside from that, projects can be set up so that you don't even see that they are using a venv. For example, I usually create a Makefile that does the venv activation and running for me. Rarely, if ever, do I have to activate it manually. Since each line in a Makefile target runs in its own shell, nothing ever pollutes my actual top-level shell.
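
A sketch of the subshell trick (the script name is a placeholder):

  ( . venv/bin/activate && python my_script.py )   # activation lives and dies inside the ( ... ) subshell
  echo "$VIRTUAL_ENV"                              # empty again out here; nothing leaked into your shell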

> (1) How old must the Python version of those projects be to no longer be supported by any decent GNU/Linux distribution?

Debian 13 defaults to Python 3.13. Between Python 3.12 and Python 3.13, support for `pkg_config` got dropped, so pip projects like

https://pypi.org/project/remt/

break. What I was not aware of: `venv`s need to be created with the version of Python they are supposed to be run with. So you need to have a downgraded Python executable first.

> What I was not aware of: `venv`s need to be created with the version of Python they are supposed to be run with. So you need to have a downgraded Python executable first.

This is one of uv’s selling points. It will download the correct Python version automatically, create the venv using it, ensure that venv has your dependencies installed, and ensure the venv is active whenever you run your code. I’ve been bitten by the issue you’re describing many times before, and previously had to use a mix of tools (e.g. pyenv + pipenv). Now uv does it all, and much better than any previous solution.
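
Day-to-day that looks roughly like this (a sketch; assumes a project with a `pyproject.toml`, and `main.py` is a placeholder):

  uv python install 3.12   # fetches a standalone interpreter even if the distro dropped it
  uv venv --python 3.12    # creates .venv with that interpreter
  uv sync                  # installs the project's locked dependencies into .venv
  uv run python main.py    # runs inside the venv, no activation or PATH changes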

> (2) Are you not using official Python docker images?

Would you help me make it work?

  docker run -it --rm -v$(pwd):/venv --entrypoint python python:3.12-alpine -m venv /venv/remt-docker-venv
How do I source it?

  cd remt-docker-venv/
  source bin/activate
  python --version
  bash: python: command not found

Instead of "python --version", just use the "python" executable from within the venv. Sourcing is a concept for interactive shells.

The python executable from the venv is not going to work inside the container, as it's a symlink by default. That venv was built on their host OS, and its symlink to the host's Python binary is not going to work inside the container.

You could also pass the `--copies` parameter when creating the initial venv, so it's a copy and not symlinks, but that is not going to work if you're on macOS or Windows (because the binary platform is different from the Linux that's running the container), or if your development Python is built with different library versions than the container you're starting.

I'd recommend re-creating the virtual environment inside the Docker container.

The problem is you are mounting a virtual environment you have built in your development environment into a Docker container. Inside your virtual environment there's a `python` binary that in reality is a symlink to the python binary in your OS:

  cd .venv
  ls -l bin/python
  lrwxr-xr-x@ 1 myuser  staff  85 Oct 29 13:13 bin/python -> /Users/myuser/.local/share/uv/python/cpython-3.13.5-macos-aarch64-none/bin/python3.13
So, when you mount that virtual environment in a container, it won't find the path to the python binary.

The most basic fix would be recreating the virtual environment inside the container, so from your project (approximately, I don't know the structure):

   docker run -it --rm -v$(pwd):/app --entrypoint ash ghcr.io/astral-sh/uv:python3.12-alpine
  / # cd /app
  /app # uv pip install --system -r requirements.txt
  Using Python 3.12.12 environment at: /usr/local
  Resolved 23 packages in 97ms
  Prepared 23 packages in 975ms
  Installed 23 packages in 7ms
  [...]
  /app # python
  Python 3.12.12 (main, Oct  9 2025, 22:34:22) [GCC 14.2.0] on linux
  Type "help", "copyright", "credits" or "license" for more information.
But if you're developing and don't wanna build the virtual environment each time you start the container, you could create a cache volume for uv, and after the first installation everything is going to be way faster:

  # First run
   docker run -ti --rm --volume .:/app --volume uvcache:/uvcache -e UV_CACHE_DIR="/uvcache" -e UV_LINK_MODE="copy" --entrypoint ash ghcr.io/astral-sh/uv:python3.12-alpine
  / # cd /app
  /app # uv pip install -r requirements.txt --system
  Using Python 3.12.12 environment at: /usr/local
  Resolved 23 packages in 103ms
  Prepared 23 packages in 968ms
  Installed 23 packages in 16ms
  [...]
  # Second run
   docker run -ti --rm --volume .:/app --volume uvcache:/uvcache -e UV_CACHE_DIR="/uvcache" -e UV_LINK_MODE="copy" --entrypoint ash ghcr.io/astral-sh/uv:python3.12-alpine
  / # cd /app
  /app # uv pip install -r requirements.txt --system
  Using Python 3.12.12 environment at: /usr/local
  Resolved 23 packages in 10ms
  Installed 23 packages in 21ms
You can also see some other examples, including a Docker Compose one that automatically updates your packages, here:

https://docs.astral.sh/uv/guides/integration/docker/#develop...

---

Edit notes:

  - UV_LINK_MODE="copy" is to avoid a warning when using the cache volume
  - Creating the venv with `--copies` and mounting it into the container would fail
    if your host OS is not exactly the same as the container's, and it also somewhat
    defeats the purpose of using a versioned Python container

> demanding new --yes-destroy-my-computer-dangerously-and-step-on-my-face-daddy flags and crashing my automated scripts from last year.

Literally, my case. I recently had to compile an abandoned six-year-old scientific package written in C with Python bindings. I wasn’t aware that modern versions of pip handle builds differently than they did six years ago: specifically, pip now builds wheels within an isolated environment. I was surprised to see a message indicating that %package_name% was not installed, even though I was still able to import it. By the second day, I had finally discovered pip's --no-build-isolation option.
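
For anyone hitting the same wall, the workaround looks roughly like this (a sketch; the build dependencies and package path are placeholders for whatever the old package actually needs):

  pip install setuptools wheel numpy              # with isolation off, build deps must already be in your env
  pip install --no-build-isolation ./old-package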

As for not having to call 'source ...' in a Dockerfile: if you use the python executable from the virtualenv directly, it will behave as if you had activated that virtualenv.

This works because of the relative path to the pyvenv.cfg file.
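
For example (a sketch; the venv path and script name are made up for illustration):

  /opt/venv/bin/python -m pip install requests   # installs into /opt/venv, no activation needed
  /opt/venv/bin/python app.py                    # sees exactly the packages in that venv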

The way to activate a virtual environment in a Docker container is to export a modified PATH and possibly change PYTHONHOME.
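
Concretely, that is more or less what the stock `activate` script does anyway (a sketch; `/opt/venv` is an assumed location):

  export VIRTUAL_ENV=/opt/venv
  export PATH="/opt/venv/bin:$PATH"             # the activate script also unsets PYTHONHOME if it was set
  python -c "import sys; print(sys.prefix)"     # now reports /opt/venv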

I think my ultimate problem with venv is that virtual environments are solved by Docker. Sure sure, full-time Python devs need a way to manage multiple Python and package versions on their machine, and that’s fine. But whatever they need has to not get in my way when I come in to do DevOps stuff. If my project needs a specific version of Node, I don’t need nvm or n; I just install the version I want in my Dockerfile. Same with Go, same with most languages I use.

Python sticks out for having the arrogance to think that it’s special, that “if you’re using Python you don’t need Docker, we already solved that problem with venv and conda”. And like, that’s cute and all, but I frequently need to package Python code and code in another language into one environment, and the fact that their choice for “containerizing” things (venv/conda) plays rudely with every other language’s choice (Docker) is really annoying.

Then use a Docker container that has the right Python version already? There are official containers for that.

If that's not good enough for you, you could do some devops stuff and build a docker container in which you compile Python.

I don't see where it is different from some npm project. You just need to use the available resources correctly.

I don't understand why you can't just install Python in your container. How does venv make it hard?

> I don’t want a thing that hijacks my terminal and PATH, flips my world upside down and makes writing automated headless scripts and systemd services a huge pain.

pip and venv are not such things. The activation script is completely unnecessary, and provided as a convenience for those to whom that workflow makes more sense.

> Every time I need to containerize or setup a Python project for some arbitrary task, there’s always an issue with “Your Linux distro doesn’t support that version of Python anymore“

I can't fathom why. First off, surely your container image can just pin an older version of the distro? Second, right now I have Python versions 3.3 through 3.14 inclusive built from source on a very not-special consumer Linux distro, and 2.7 as well.
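
For what it's worth, building an old interpreter yourself is only a few commands (a sketch; the version and install prefix are arbitrary, and you'd first download the matching source tarball from python.org):

  cd Python-3.9.19
  ./configure --prefix="$HOME/.pythons/3.9"
  make -j"$(nproc)"
  make altinstall          # altinstall avoids clobbering the distro's python3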

> and triggering an avalanche of new “you really shouldn’t install packages globally” messages, demanding new --yes-destroy-my-computer-dangerously-and-step-on-my-face-daddy flags and crashing my automated scripts from last year.

Literally all you need to do is make one virtual environment and install everything there, which again can use direct paths to pip and python without sourcing anything or worrying about environment variables. Oh, and fix your automated scripts so that they'll do the right thing next time.

> I know that if I was a full-time Python dev there’s a “better way” that I’d know about.

Or, when you get the "you really shouldn't install packages globally" message, you could read it — as it gives you detailed instructions about what to do, including pointing you at the documentation (https://peps.python.org/pep-0668/) for the policy change. Or do a minimum of research. You found out that venvs were a thing; search queries like "python venv best practices" or "python why do I need a venv" or "python pep 668 motivation" or "python why activate virtual environment" give lots of useful information.

> I don’t want a thing that hijacks my terminal and PATH

The shame is ... it never had to be that way. A venv is just a directory with a pyvenv.cfg, symlinks to an interpreter in bin, and a site-packages directory in lib. Running anything with venv/bin/python _is_ running in the virtual environment. Pip operations in the venv are just venv/bin/python -m pip ... . All the source/deactivate/shell nonsense obfuscating that reality did a disservice to a generation of python programmers.
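
To make that concrete (sketched from a Linux layout; Windows uses `Scripts` instead of `bin`, and some distros add a `lib64` symlink):

  python3 -m venv venv
  ls venv                                    # bin  include  lib  pyvenv.cfg
  venv/bin/python -m pip install requests    # "pip operations in the venv"
  venv/bin/python -c "import requests"       # running in the venv, nothing sourced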

> The shame is ... it never had to be that way.

It isn't that way. Nothing is preventing you from running the venv's python executable directly.

But the original designer of the concept appears to have thought that activation was a useful abstraction. Setting environment variables certainly does a lot to create the feeling of being "in" the virtual environment.

Conda is open source. Not sure what you mean about an EULA. There are some license agreements if you use Anaconda, but if you just use conda-forge you don't have any entanglements with Anaconda the company. (I agree the nomenclature is confusing.)

I… I’m sorry to hear that. Wow. That is shockingly bad.

Seriously, this is why we have trademarks. If Anaconda and Conda (a made-up word that only makes sense as a nickname for Anaconda and thus sounds like it’s the same thing) are two projects by different entities, then whoever came second needs to change their name, and whoever came first should sue them to force the change. Footguns like this should not be allowed to exist.

It's not like they're entirely separate and unrelated things. Anaconda is a company that created a program called Conda which can connect to various "channels" to get packages, and initially the main one was the Anaconda channel. Conda was open source but initially its development was all done by Anaconda. Gradually the Conda program was separated out and development was taken over by a community team. Also there is now conda-forge which is a community-run channel that you can use instead of the Anaconda one. And then there is also Mamba which is basically a faster implementation of Conda. That's why there's the awkward naming. It's not like there are competing groups with similar names, it's just things that started off being named similarly because they were built at one company, but gradually the pieces got separated off and moved to community maintenance.

Next to Anaconda/conda/mamba, you forgot micromamba.

Anaconda suddenly increased its licensing fees, like Broadcom did with VMware; many companies stopped using it because of the sudden increase in costs.

https://blog.fulcrumgenomics.com/p/anaconda-licensing-change... https://www.theregister.com/2024/08/08/anaconda_puts_the_squ...

Conda was made by Anaconda, so there's no one to sue. It's like Chromium vs. Chrome.