Why does "good" C have to be zero alloc? Why should "nice" javaesque make little sense in C? Why do you implicitly assume performance is "efficient problem solving"?
I'm not sure why so many people seem fixated on the idea that using a programming language must follow one particular approach. You can write minimal-alloc Java, you can simulate OOP-style code in C, etc.
It may be unconventional, but why do we need to restrict certain optimizations (space/time performance, "readability", conciseness, etc.) to one particular language?
Because in C, every allocation incurs a responsibility: you have to track its lifetime and know who will eventually free it. Copying and moving buffers is also prone to overflows, off-by-one errors, etc. And the general-purpose memory allocator is a smart but unpredictable beast that lives in your address space: it can trash your CPU cache, introduce unwanted memory fragmentation, and so on.
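To make that concrete, here's the classic shape of the problem (dup_string is a made-up example): every allocating function pushes an invisible obligation onto all of its callers.

    #include <stdlib.h>
    #include <string.h>

    /* Returns a heap copy. The contract "caller must free() this"
       exists only in this comment; nothing enforces it. */
    char *dup_string(const char *s) {
        size_t n = strlen(s) + 1;   /* forget the +1 and you overflow */
        char *p = malloc(n);
        if (p != NULL)
            memcpy(p, s, n);
        return p;
    }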
In Java, you don't care, because the GC cleans up after you and you don't usually care about millisecond-grade performance.
No. Look up arenas. In general, group allocations to avoid making a mess.
If you send a task off to a work queue on another thread and then do some local processing on it, you usually can't use a single arena, unless the work queue itself is short-lived.
I don't see how arenas solve the problems.
You group things from the same context together, so you can free everything in a single call.
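A minimal sketch of the idea, assuming a fixed backing buffer (real arenas usually grow in blocks, but the ownership story is the same):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint8_t *base;  /* backing buffer, allocated once up front */
        size_t   cap;   /* total capacity */
        size_t   used;  /* bump pointer */
    } Arena;

    /* Carve the next `size` bytes out of the arena; NULL when full. */
    void *arena_alloc(Arena *a, size_t size) {
        size_t at = (a->used + 15) & ~(size_t)15;  /* keep 16-byte alignment */
        if (at > a->cap || size > a->cap - at)
            return NULL;
        a->used = at + size;
        return a->base + at;
    }

    /* Everything allocated from this context dies in one call. */
    void arena_reset(Arena *a) { a->used = 0; }

Per-object frees disappear entirely, and so does the "who frees this" question within one context.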
No. Arenas are not a general-case solution. Look it up.
> Why should "nice" Java-esque code make little sense in C?
Most importantly: because Java is tracking the memory for you.
In Java, you could create an item, send it into a queue to be processed concurrently, but then also keep dealing with that item where you created it. That creates a huge problem in C, because the question becomes: who frees that item?
In Java, you don't care. The freeing happens automatically once nobody references the item.
In C, it's a big headache. The concurrent consumer can't free the memory, because the producer might not be done with it; and the producer can't free it, because the consumer might not have run yet. In idiomatic Java, all you have to do is make sure your queue is safe to use concurrently.

The right thing to do in C would be to restructure things so the producer is done with the item before it's handed off to the queue, or to send a copy of the item into the queue, so that the question of "who frees this" is straightforward. You can take both approaches in Java too, but why would you? If the item is immutable, there's no harm in simply sharing the reference with 100 things and moving on.
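A sketch of the copy approach; the Item type and queue_push here are hypothetical stand-ins for whatever your queue actually looks like:

    #include <stdlib.h>
    #include <string.h>

    typedef struct { int id; char payload[64]; } Item;

    /* Stand-in for a real enqueue; assume the queue takes ownership
       and the consumer calls free() when it's done. */
    static void queue_push(Item *owned) { free(owned); }

    void produce(const Item *local) {
        /* Hand the queue its own copy, so ownership is unambiguous:
           the consumer frees the copy, the producer keeps using `local`. */
        Item *copy = malloc(sizeof *copy);
        if (copy == NULL)
            return;                     /* error handling elided */
        memcpy(copy, local, sizeof *copy);
        queue_push(copy);

        /* ...keep using `local` here; nobody else has a pointer to it. */
    }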
In C++ and Rust, you'd likely wrap that item in some sort of atomically reference-counted structure (shared_ptr, Arc).
> Why does "good" C have to be zero alloc?
GP didn't say "zero-alloc", but "minimal alloc".
> Why should "nice" Java-esque code make little sense in C?
There's little to no indirection in idiomatic C compared with idiomatic Java.
Of course, in both languages you can write unidiomatically, but that is a great way to ensure that bugs get in and never get out.
In C, direct memory control is the top feature, which means you can assume anyone who uses your code will want to control memory throughout the process. That means not allocating from wherever and handing back blobs of memory, which means designing your APIs differently, which is part of the reason learning C well takes so long.
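As an illustration (both format_name functions are invented for the example), compare an API that hands back a blob with one that lets the caller decide where the memory lives:

    #include <stdio.h>

    /* Allocating style: a hidden malloc() obligates every caller
       to remember a matching free(). */
    char *format_name_alloc(const char *first, const char *last);

    /* Caller-controlled style: the buffer can live on the stack, in
       static storage, or in an arena; the return value reports the
       space a full result needs, as snprintf does. */
    int format_name(char *dst, size_t cap, const char *first, const char *last) {
        return snprintf(dst, cap, "%s %s", first, last);
    }

    /* Usage:
           char buf[64];
           format_name(buf, sizeof buf, "Dennis", "Ritchie");
    */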
I started writing sort of a style guide to C a while ago, which attempts to transfer ideas like this one more by example:
https://github.com/codr7/hacktical-c
Echoing my sibling comment - thanks for sharing this.
Thanks for sharing this work.
It sounds weird. If I write Python code with minimal side effects like in Haskell, wouldn't it at least reduce the possibility of side-effect bugs even though it wasn't "Pythonic"?
AFAIK, nothing in the language standard mentions anything about "idiomatic" or "this is the only correct way to use X". The definition of "idiomatic X" is not as clear-cut and well-defined as you might think.
I agree there's a risk with an unidiomatic approach. Irresponsibly applying "cool new things" is a good way to destroy "readability" while gaining almost nothing.
Anyway, my point is that there's no single definition of "good" that covers everything, and "idiomatic" is just whatever convention a particular community is used to.
There's nothing wrong with applying an "unidiomatic" mindset, like awareness of stack/heap allocation, CPU cache lines, SIMD, or static/dynamic dispatch, in languages like Java, Python, or whatever.
There's nothing wrong, either, with borrowing ideas like Haskell-style functors, hierarchical namespaces, visibility modifiers, borrow checking, or dynamic dispatch in C.
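For instance, dynamic dispatch is routinely borrowed into C with a hand-rolled vtable of function pointers; a minimal sketch:

    #include <stdio.h>

    typedef struct Shape Shape;
    typedef struct {
        double (*area)(const Shape *self);
    } ShapeVTable;

    struct Shape { const ShapeVTable *vt; };

    typedef struct { Shape base; double r; } Circle;

    static double circle_area(const Shape *self) {
        const Circle *c = (const Circle *)self;  /* base is first member */
        return 3.14159265358979 * c->r * c->r;
    }

    static const ShapeVTable circle_vt = { circle_area };

    int main(void) {
        Circle c = { { &circle_vt }, 2.0 };
        /* Dispatch through the vtable, like a virtual call. */
        printf("%f\n", c.base.vt->area(&c.base));
        return 0;
    }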
Whether it's "good" or not is left as an exercise for the reader.
> Why does "unidiomatic" have to imply "buggy" code?
Because when you stray from idioms, you're heading down unfamiliar paths. Every language has better support for some idioms than for others. Trying to pound a square peg into a round hole can work, but it's unlikely to work well.
> You're basically saying an unidiomatic approach is doomed to introduce bugs and will never reduce them.
Well, yes. Who's going to reduce them? Where are you planning to find people who are used to code written in an unusual manner?
First and foremost, code is written for humans to read. If you write it in a way that's difficult for humans to read, then of course the bug count can only go up, not down.
> It sounds weird. If I write Python code with minimal side effects like in Haskell, wouldn't it at least reduce the possibility of side-effect bugs even though it wasn't "Pythonic"?
"Pythonic" does not mean the same thing as "Idiomatic code in Python".
Good C has minimal allocations because you, the human, are the memory allocator. It's up to your own meat brain to correctly track every allocation and deallocation. Over the last half-century, C programmers have converged on some best practices to manage this more effectively: we allocate statically and kick allocations up the call chain as far as possible. Anything to get that bit of tracked state out of your head.
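Concretely (names invented for the example), "kicking allocations up the call chain" means the leaf does the work while the top of the chain decides where the memory lives:

    #include <stdio.h>

    /* The leaf formats a message but allocates nothing;
       the buffer is handed down from above. */
    static void build_message(char *out, size_t cap, const char *user) {
        snprintf(out, cap, "hello, %s", user);
    }

    int main(void) {
        /* The top of the chain picks the storage: here a static
           buffer, so there's no lifetime to track and nothing to free. */
        static char msg[128];
        build_message(msg, sizeof msg, "world");
        puts(msg);
        return 0;
    }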
But we use different approaches in different languages because each language is designed around its approach. You can do OOP in C, and you can do manual memory management in C#. Most people don't, because it's unnecessarily difficult to use a language in a way it wasn't designed for. Plus, when you reinvent a wheel like "classes", you will inevitably introduce bugs you wouldn't have if you'd used a language with proper support for that construct. You can use a hammer to pull out a screw, but you'd do a much better job with a screwdriver.
Programming languages are not all created equal, and they are absolutely not interchangeable. A language is much, much more than its text and grammar. The entire reason we have different languages is that we needed different ways to express certain classes of problems and constructs, ways that go far beyond textual representation.
For example, in a strictly typed OOP language like C#, classes are hideously complex under the hood: miles and miles of code to handle vtables, inheritance, polymorphism, and virtual and abstract functions and fields. Implementing all of this in C would require effort far beyond what any single programmer can produce in a reasonable time. Similarly, I'm sure one could force JavaScript into a strict typing and generics system like C#'s, but again the effort would be enormous and guaranteed to introduce many bugs.
We use different languages in different ways because they're different and work differently. You're asking why everyone drives screws with their screwdriver instead of using its handle to pound in nails. Different tools, different uses.