If software development has taught me anything, it is that everything that can go wrong will go wrong, and the impossible will happen. As a result, I prefer having fewer things that can go wrong in the first place.

Since I acknowledge my own fallibility and the remote possibility of bad things happening, I have come to prefer reliability above everything else. I don't want a bucket that leaks from a thousand holes. I want the leaks to be visible, in places I am aware of, where I can find and fix them easily. I am unable to write C code to that standard in an economical fashion, which is why I avoid C as much as possible.

This is, perhaps surprisingly, what I consider the strength of C. It doesn't hide issues behind some language abstraction; you are in full control of what the machine does. The bug is right there in front of you if you are able to spot it (provided it's not hiding away in some third-party library, of course). Spotting it takes many years of practice, but once you have your own best practices nailed down, this doesn't happen as often as you might expect.

Also, code doesn't need to be bulletproof. When you design your program, you also define a scope: this program will only work under these conditions. A program that misbehaves outside that scope is actually totally fine.

Empirically speaking, programmers as a whole are quite bad at avoiding such bugs. Humans are fallible, which is why I personally think it's good to have tools to catch when we make mistakes. One man's "this takes control away from the programmer" is another man's "friend that looks at my work to make sure it makes sense".

How is one in full control of SIMD and CPU/OS scheduling on NUMA architectures in C?

Linux has libnuma (https://man7.org/linux/man-pages/man3/numa.3.html) while Windows has its own NUMA API (https://learn.microsoft.com/en-us/windows/win32/procthread/n...)

For CPU/OS scheduling, use the pthreads/OpenMP APIs to set processor affinity for threads.

For SIMD, use compiler intrinsics (or the GCC/Clang vector extensions).

None of that is written in pure C, as per the ISO C standard.

Rather, they rely on a mix of C compiler language extensions and helper functions written in inline or external assembly, which any other compiled language also has available once it steps outside its own standard.

I think you are being nitpicky here.

When most people say "I write in C", they don't mean the abstract ISO C standard, with the possibility of CHAR_BIT=9. They mean "C for my machine": C with compiler extensions, assumptions about the memory model, and yes, occasional inline assembly.

I am, because people keep making C out to be something special that it isn't.

Other languages share the same features.

That is not an argument. ANSI/ISO C standardizes the hardware-independent parts of the language, but at some point you have to meet the hardware. The concept of an "implementation platform" (i.e. CPU arch + OS + ABI) is well known for all language runtimes.

All apps using the above-mentioned are written in standard ANSI/ISO C. The implementations themselves are "system level" code and hence have language/HW/OS-specific extensions, which is standard practice when interfacing with low-level code.

> any language compiled language also has available

In theory yes, but in practice never with the ease or flexibility that C offers for the job. This is what people mean when they say "C is close to the metal" or "C is a high-level assembly language".

It is, because C is nothing special; those features are available in other languages.

This was proven before C was even a dream at AT&T, and by all the other OS vendors outside Bell Labs using other systems languages.

Then people get to argue that C can do X; yeah, provided "C" means compiler XYZ's C dialect.

Not quite.

C took off because system programmers could not do with other languages what they wanted, with the ease and flexibility that C offered.

Having a feature in a language is not the same as being able to easily span hardware, OS, and application in the same language and runtime.

Not really.

C took off because it was free, shipped alongside an operating system that was initially available for a symbolic price, as AT&T was forbidden from taking commercial advantage of UNIX.

Had UNIX been a commercial operating system, with additional licenses for the C compiler, like every other operating system outside Bell Labs, we would not even be talking about C in 2026.

Being easily affordable/available in those times was the initial "hook", but C's subsequent and sustained success was due to a happy confluence of various design decisions.

Not too high-level, not too low-level; easy access to memory and the ISA; a simple abstract machine; an imperative, procedural style; spanning bare metal, OS, and application; adoption by the free software movement, which produced free compilers and tools; becoming the de facto industry-standard ABI, etc. All of these were crucial in its rise to power.

Note that its main competitor at the time, Pascal, lost out in spite of being simpler, having clean high-level features, being promoted by academia, being safety-focused, etc.

As Dennis Ritchie himself said in "The Development of the C Language" (https://www.nokia.com/bell-labs/about/dennis-m-ritchie/chist...):

C is quirky, flawed, and an enormous success. While accidents of history surely helped, it evidently satisfied a need for a system implementation language efficient enough to displace assembly language, yet sufficiently abstract and fluent to describe algorithms and interactions in a wide variety of environments.