LOL, Python is plenty fast if you make sure it calls C or Rust behind the scenes. Typical of 'professional' Python people. Something too slow? Just drop into C. It surely sounds weird to anyone who complains about Python being slow and gets a response along these lines.

But that’s the whole point of it. You have the option to get that speed when it really matters, but can use the easier dynamic features for the very, very many use cases where that’s appropriate.
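A minimal sketch of what "getting that speed when it matters" looks like in practice, assuming NumPy is installed (the timings are whatever your machine produces, not measured claims):

    # Sketch: the same reduction, once as an interpreted Python loop, once
    # as a single call into NumPy's C loop. (Assumes NumPy is installed.)
    import time
    import numpy as np

    n = 10_000_000
    data = list(range(n))
    arr = np.arange(n, dtype=np.int64)

    def py_sum(xs):
        total = 0
        for x in xs:          # every iteration runs interpreter bytecode
            total += x
        return total

    t0 = time.perf_counter()
    total_py = py_sum(data)
    t1 = time.perf_counter()
    total_np = int(arr.sum())  # one crossing into compiled code
    t2 = time.perf_counter()

    assert total_py == total_np
    print(f"pure Python loop: {t1 - t0:.3f}s   NumPy: {t2 - t1:.3f}s")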

This is an eternal conversation. Years ago, it was assembler programmers laughing at inefficient C code, and C programmers replying that sometimes they don’t need that level of speed and control.

You are correct. However, it took only about 10 years for C compilers to beat hand-written assembly (for the average programmer), thus proving the naysayers wrong.

Meanwhile Python is just as slow today as it was 30 years ago (on the same machine).

People really misconstrue the relationship between Python and C/C++ in these discussions.

Those libraries didn't spring out of thin air, nor did they already exist.

People badly wanted to write and interface in Python; that's why you have all these libraries with substantial code in another language, yet research and development didn't just shift to that language.

TensorFlow is a C++ library with a Python wrapper. PyTorch has supported a C++ interface for some time now, yet virtually nobody actually uses TensorFlow or PyTorch in C++ for ML R&D.

If Python were fast enough, most would be fine, probably even happy, to ditch the C++ backends and have everything in Python, but the reverse isn't true. The C++ interface exists, and no one is using it. C++ is the replaceable part of this equation. Nobody would really care if Rust were used instead.
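To make the asymmetry concrete, a tiny sketch (assuming PyTorch is installed) of the side people actually use, where the Python call is a thin wrapper over the compiled backend:

    # Sketch: the Python side is just dispatch; the matmul itself executes
    # in PyTorch's C++ kernels, not in Python loops.
    import torch

    x = torch.randn(1024, 1024)
    y = x @ x            # dispatched to compiled kernels
    print(y.shape)       # torch.Size([1024, 1024])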

Even as a Fortran programmer, the majority of my flops come from BLAS, LAPACK, and those sort of libraries… putting me in the exact same boat as the Python programmers, really. The “professional” programmers in general don’t worry too much about tying their identities to language choices, I think.
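For illustration, the same split is visible from Python; a rough sketch (assuming NumPy and SciPy are installed) where the flops land in BLAS whether you go through NumPy's operator or call the routine directly:

    # Sketch: the same matrix product via NumPy and via the BLAS routine
    # dgemm, called directly through SciPy's BLAS bindings.
    import numpy as np
    from scipy.linalg.blas import dgemm

    a = np.random.rand(512, 512)
    b = np.random.rand(512, 512)

    c_numpy = a @ b            # NumPy hands this to whatever BLAS it was built against
    c_blas = dgemm(1.0, a, b)  # the Fortran routine, invoked directly

    assert np.allclose(c_numpy, c_blas)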

This is a very common pattern in high-level languages and has been a thing ever since Perl first came onto the scene. The whole point is that you use a more ergonomic, easier-to-iterate language like Perl or Python for most of your logic and drop down into C, C++, Zig, or Rust to write the performance-sensitive portions of your code.
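A small sketch of that pattern using ctypes; "libfast.so" and its dot() function are hypothetical stand-ins for whatever hot loop got rewritten in C:

    # Sketch of the two-language pattern via ctypes. The library and its
    # dot() signature are assumptions, not a real package.
    import ctypes
    import numpy as np

    lib = ctypes.CDLL("./libfast.so")   # hypothetical compiled C library
    lib.dot.restype = ctypes.c_double
    lib.dot.argtypes = [ctypes.POINTER(ctypes.c_double),
                        ctypes.POINTER(ctypes.c_double),
                        ctypes.c_size_t]

    def fast_dot(a, b):
        # One boundary crossing: pass two contiguous buffers, get one float back.
        a = np.ascontiguousarray(a, dtype=np.float64)
        b = np.ascontiguousarray(b, dtype=np.float64)
        pa = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
        pb = b.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
        return lib.dot(pa, pb, a.size)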

When compiled languages became popular again in the 2010s, there was a renewed effort to build ergonomic compiled languages and buck this trend (Scala, Kotlin, Go, Rust, and Zig all gained their popularity in that timeframe), but there's still a lot of code written in the two-language pattern.

And then someone needs to cross the FFI boundary multiple times, and the performance gain gets eaten by the overhead again.

If what one is doing in scientific computing needs to cross the FFI boundary that many times, they're doing it wrong...

This assumes the boundary between Python and the native code is clean and rarely crossed.
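A rough sketch of why that matters, with NumPy standing in for any native backend: same arithmetic, very different number of Python-to-native transitions.

    # Many small crossings vs. one big one. (Assumes NumPy is installed.)
    import numpy as np

    xs = np.random.rand(100_000)

    # Many crossings: one tiny native call per element, so per-call
    # overhead dominates the actual work.
    slow = sum(float(np.sqrt(x)) for x in xs)

    # One crossing: hand the whole array over once and let C do the loop.
    fast = float(np.sqrt(xs).sum())

    assert abs(slow - fast) < 1e-9 * fast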