For me uv seems to solve some of the worst pain points of Python, which is great since I have to work with it. I think for a lot of people the hate comes in when they have to maintain or deploy Python code in scenarios that Python and its libraries weren't designed for. Some stuff just makes Python seem like an "unserious" programming language to me:

1. Installation & dependencies: Don't install Python directly; instead install pyenv, use pyenv to install Python and pip, use venv to create a virtual environment, then use pip inside it to install your dependencies. For any non-trivial project you have to be incredibly careful with dependency management, because breaking changes are extremely common.

2. Useless error messages: Outside of trivial examples with no external packages, I can't remember a time the error message directly pointed at the actual issue in the code. To give a quick example (pointing back to the point above), I got the error message "ImportError: cannot import name 'ChatResponse' from 'cohere.types'". A quick Google search reveals that this happens if a) the cohere API key isn't set in the environment or b) you use langchain-cohere 0.4.4 with cohere 5.x, since the two aren't compatible.
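To show how little that kind of message tells you, here's a minimal self-contained sketch (using a made-up `fakelib` module, not the real cohere or langchain-cohere packages) of a symbol disappearing between releases:

```python
import sys
import types

# Hypothetical stand-in for a library that renamed a symbol between
# releases; "version 5.x" of the module no longer exports ChatResponse.
pkg = types.ModuleType("fakelib")
mod = types.ModuleType("fakelib.types")
pkg.types = mod
sys.modules["fakelib"] = pkg
sys.modules["fakelib.types"] = mod

try:
    from fakelib.types import ChatResponse
except ImportError as e:
    # The message names the missing symbol, but says nothing about
    # which pair of package versions would actually be compatible.
    print(e)
```

The error is technically accurate, it just points at the symptom (a missing name) instead of the cause (a version mismatch two packages away).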

3. Undisciplined I/O in libraries: Another ML library I recently deployed has a log-to-file mode. Fair enough, disable it before the k8s deployment, no biggie. Well, the library still crashes, because it checks whether it has rwx permissions on a directory it doesn't even need.

4. Type conversions in C interop: Admittedly I was at the edge of my own capabilities when I dealt with these issues, but we had large integers break when passing them through numpy/pandas for some transforms. It was a pain to fix, because Python makes it difficult to understand what's in a variable, and what happens to it when it leaves Python.

1. and 4. are mainly issues with people doing things in Python it wasn't really designed for. Using Python as a scripting language or a thin (!) abstraction layer over C is where it really shines. 2. and 3. have more to do with the community, but they are compounded by bad language design.

1. is true, yup, people have been ragging on the Python-install superfund-site problem for years, but the rest of those are entirely 3rd-party library issues. It's like saying Windows is not a serious operating system because you installed a buggy application.

2. I've used a ton of languages and frankly Python has the best tracebacks, hands down; it's not even close. It's not Python's fault a 3rd-party library is throwing the wrong error.

3. Again, why is it bad language design that a library can do janky things with I/O?

4. FFI is tricky in general, but this sounds primarily like a "read the docs" problem. All the major numeric acceleration libraries use fixed-size numbers; Python itself uses arbitrary-precision integers that can be any size. You have to stay in the arrays/tensors to get predictable behavior. This is literally Python being "a thin abstraction layer over C."
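The boundary is easy to demonstrate (a minimal sketch, assuming NumPy is installed): a Python int just keeps growing, but the moment the same value lands in an int64 array, arithmetic wraps in two's complement:

```python
import numpy as np

x = 2**63 - 1            # the largest value an int64 can hold
print(x + 1)             # Python bigint: 9223372036854775808, no problem

arr = np.array([x], dtype=np.int64)
print(int((arr + 1)[0]))  # wraps to -9223372036854775808, silently
```

Same value, same `+ 1`, opposite sign, which is exactly the kind of surprise that bites when numpy/pandas sit in the middle of a pipeline.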

I'm deliberately not differentiating between the language, the tool-chain, the libraries and the community. They are all closely connected, and in the end you're always buying into the bundle.

2. I would argue that the ubiquity of needing stack traces in Python is the main problem. Why are errors propagating up from so deep? In Rust I know I'm in trouble when I'm looking at a stack trace. The language forces you to handle your errors, and while that can feel limiting, it makes writing correct and maintainable code much more likely. Python lets you be super optimistic about the assumptions of your context and data, which is fine for prototyping, but terrible for production.
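The contrast can be sketched in Python itself (function names and the config shape are made up for illustration): the optimistic version lets a bad config surface as a traceback from deep in the call stack, while the checked version fails at the boundary with context:

```python
def port_of(cfg: dict) -> int:
    # Optimistic style: assumes the keys exist and the value parses.
    # A bad config surfaces later, as a KeyError/ValueError traceback
    # from wherever this happens to get called.
    return int(cfg["server"]["port"])

def port_of_checked(cfg: dict) -> int:
    # Rust-flavored style: validate at the boundary, fail with context.
    try:
        return int(cfg["server"]["port"])
    except (KeyError, TypeError, ValueError) as e:
        raise ValueError(f"config needs an integer server.port: {e!r}") from e

print(port_of_checked({"server": {"port": "8080"}}))  # 8080
```

Nothing in the language pushes you toward the second version; in Rust, the equivalent of `port_of` simply wouldn't compile without acknowledging the failure cases.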

3. I agree that this isn't directly a language design issue, but there's a reason I feel the pain in Python and not in Rust or Java. Dynamic typing means you don't know what side effects a library might have until runtime. But fundamentally it is a skill issue. When I deploy code written by Java, Go or Rust people, they generally know I/O is important and have spent the necessary time thinking about it. JS, Python or Ruby devs often haven't.

4. The issue is that Python's integer handling sets the expectation that numbers "just work," and then that expectation breaks at the FFI boundary. And once you're off the beaten path, things get really hard. The cognitive load of tracking which numeric type you're in at any moment sucks. I completely agree that this was a skill issue on my part, but I am quite sure I would not have had that problem in a properly typed, compiled language.

I do think some module writers get overexcited about using some dynamic features and it can be hard to understand what's going on. On the other hand....

Dynamic languages let you do a lot of things with metaclasses and monkey-patching that allow you to use a library without needing to build your own special version of it. With some C or C++ library that did something bad (like the logging one you mentioned) there's nothing for it but to rebuild it. With Python you may well be able to monkey-patch it and live to fight another day.

It is great when you're dealing with 3rd party things that you either don't have the source code for or where you cannot get them to accept a patch.
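A contrived sketch of the pattern (the `NoisyLib` class and its path are made up, standing in for a library like the logging one above): you can neutralize the offending code path at runtime without touching its source:

```python
class NoisyLib:
    """Stand-in for a third-party library we can't rebuild (hypothetical)."""

    def _check_log_dir(self, path="/var/log/noisylib"):
        # The real library raised here when it lacked rwx on its log dir,
        # even with file logging disabled.
        raise PermissionError(f"no rwx access to {path}")

    def process(self, data):
        self._check_log_dir()
        return [x * 2 for x in data]

# Monkey-patch the offending check into a no-op before first use.
NoisyLib._check_log_dir = lambda self, path=None: None

print(NoisyLib().process([1, 2, 3]))  # [2, 4, 6]
```

One line in your deployment shim versus maintaining a fork; the same move works on real imported modules, since functions look up module-level names at call time.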

“Python lets me do horrifying things no one should ever do in production code” isn’t the flex you think it is.

Why should no one ever do them? They're useful. :-) FastAPI uses this stuff to make itself easier to use, for example. They're things I would have killed for when I was writing C++.

> Dynamic typing means you don't know what side effects a library might have until runtime.

Static typing (in most industrially popular languages) doesn't tell you anything about side effects, only expected inputs and return values.
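A one-function Python illustration of that point (the function and file name are made up): the annotations fully describe inputs and outputs and say nothing about the write:

```python
import os
import tempfile

def double(x: int) -> int:
    # The signature promises int -> int. Nothing in the types reveals
    # that calling this also appends to a file on disk.
    with open(os.path.join(tempfile.gettempdir(), "audit.log"), "a") as f:
        f.write(f"{x}\n")
    return x * 2

print(double(21))  # 42
```

A type checker is perfectly happy with this; you'd need an effect system (Haskell's `IO`, roughly) for the signature to expose the side effect.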