It's not just about library availability. Python wins because it lets you offload the low-level performance work to people who really know what they're doing. Libraries like NumPy or PyTorch / Keras wrap highly optimized C/C++ code, so you get near-C/C++ performance without having to write or debug C yourself, and without needing a whole computer science degree to do so properly.

It's a mistake to assume C is always faster. If you don't have a deep understanding of memory layout, compiler flags, vectorization, cache behavior, etc., your hand-written C code can easily be slower than high-level Python using well-optimized libraries. See [1] for a good example of that.
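To make the point concrete, here's a small sketch in Python/NumPy (not C, so it's only illustrative of the general principle): a straightforward triple-loop matrix multiply, the kind of code a naive C port would mirror line for line, against NumPy's BLAS-backed `@` operator. The naive loop is orders of magnitude slower even though "it's just loops", because the library call gets cache blocking, vectorization, and threading for free.

```python
import time
import numpy as np

n = 120
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def naive_matmul(a, b):
    """Textbook triple loop -- no blocking, no vectorization."""
    rows, inner, cols = a.shape[0], a.shape[1], b.shape[1]
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            s = 0.0
            for k in range(inner):
                s += a[i, k] * b[k, j]
            out[i, j] = s
    return out

t0 = time.perf_counter()
c_naive = naive_matmul(a, b)
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
c_blas = a @ b  # dispatches to an optimized BLAS routine
t_blas = time.perf_counter() - t0

# Same numerical result, wildly different cost.
assert np.allclose(c_naive, c_blas)
print(f"naive loop: {t_naive:.4f}s, BLAS-backed @: {t_blas:.6f}s")
```

The gap only widens as `n` grows; the naive version is what you'd get from "just rewrite it in C" without the expertise the libraries encode.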

Sure, you could call those same libs from C, but then you're reinventing Python's ecosystem with more effort and more chances to shoot yourself in the foot. Python gives you access to powerful, low-level tools while letting you focus on higher-level problems—in a language that’s vastly easier to learn and use.

That tradeoff isn't just convenience—it's what makes modern AI R&D productive at scale.

[1] https://stackoverflow.com/questions/41365723/why-is-my-pytho...

I feel like you're restating the same claim crote made: that there's a clean separation between Python and the lower-level libraries, meaning the user doesn't need to know what's happening at the lower level to achieve good performance. This is not true in many cases if you're aiming for peak performance, which we should be for training and serving AI systems since they're already so resource-hungry.

It isn't a claim, it's an empirical fact for which I've provided an example. The fact that another user and I made similar comments independently just goes to show how realistic this viewpoint is.

It's not an empirical fact. If you look at the code for AI frameworks, you'll see this isn't true in practice once you go beyond a single isolated matrix multiplication.

OK, have an amateur implement a hand-written Fourier transform in C and have it beat NumPy's implementation. There, now you have two examples, and there are loads more: image and signal processing in general, data frame operations like grouping and joining, big-integer arithmetic and cryptography, just plain old sorting, etc.
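For reference, here's what the amateur baseline looks like, sketched in NumPy rather than C: the textbook O(n²) DFT computed via the DFT matrix, checked against `np.fft.fft`, which uses an O(n log n) FFT algorithm. Both give the same answer; the algorithmic gap (on top of the low-level tuning) is what makes the naive version hopeless at scale.

```python
import numpy as np

def naive_dft(x):
    """Textbook O(n^2) DFT: multiply by the full DFT matrix.
    W[j, k] = exp(-2*pi*i*j*k / n)"""
    n = len(x)
    k = np.arange(n)
    w = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return w @ x

x = np.random.rand(256)
# Same result as the library's O(n log n) FFT, at O(n^2) cost.
assert np.allclose(naive_dft(x), np.fft.fft(x))
```

An honest C port of `naive_dft` inherits the O(n²) algorithm, so it loses to the library before any low-level optimization even enters the picture.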

Amateurs, and even some folks who have worked with these things for a while, won't beat off-the-shelf Python calls to highly optimized, well-constructed libraries.

I think we're talking past each other. I'm not suggesting that most users should be writing individual computational ops themselves in C; they're certainly unlikely to match the performance of expert-written C that's had time invested in it. The point I'm trying to make is that when you use a framework for modern AI, you're not just calling an individual op, or many individual ops, in isolation. It matters how multiple dependent ops are sequenced along with other code, e.g. data-loading code (which may be custom and application-specific, so not available in a library). My argument is that it may be easier to reach peak performance on your hardware if that framework code were all written in a lower-level language.
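A minimal NumPy sketch of the sequencing point (illustrative only; AI frameworks handle this with kernel fusion, e.g. compilers like XLA or `torch.compile`): each library call below is fast in isolation, but the naive chain materializes full-size temporaries and walks memory several times, while restructuring the same math to reuse buffers cuts the allocations. A fused low-level kernel could do it all in one pass over the data.

```python
import numpy as np

x = np.random.rand(1_000_000)

# Naive chaining: sin, cos, two squares, and an add each allocate
# and traverse a fresh 1M-element temporary.
y1 = np.sin(x) ** 2 + np.cos(x) ** 2

# Same computation, restructured to reuse buffers via `out=`:
# fewer allocations, though still multiple passes over memory.
tmp = np.sin(x)
np.square(tmp, out=tmp)
cos = np.cos(x)
np.square(cos, out=cos)
y2 = np.add(tmp, cos, out=tmp)

# Both reduce to the identity sin^2 + cos^2 = 1.
assert np.allclose(y1, 1.0)
assert np.allclose(y2, 1.0)
```

The per-op dispatch here is all library code; what's left on the table is how the ops compose, which is exactly the layer the framework (or a lower-level rewrite) would have to own.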

Ah, I see. For production or most real-time systems you'd be right, imo, but the time taken to complete most tasks in a conventional AI development environment means the overhead from Python moving from lib to lib becomes negligible, no?