Instead of “harmful to performance”, why can’t we say “slow”?

Harmful should be reserved for things that affect security or privacy, or that accidentally encourage bugs, like goto does.

"Considered harmful" is a meme they're referencing but yeah...its pretty stale at this point.

To me it’s not a meme, it’s a reference to a very famous letter by Dijkstra regarding goto statements.

https://en.wikipedia.org/wiki/Considered_harmful

You want to object because of a misunderstanding. The usage of the word meme here is correct in its original sense. The word cliche would also work.

That is the meme.

Not really "goto statements" so much as the goto arbitrary-control-flow semantic, a.k.a. jump.

C's goto is a housecat to the full-blown jump's tiger. No doubt an angry housecat is a nuisance, but the tiger is much more dangerous.

C goto won't let you jump straight into the middle of unrelated code, for example, but the jump instruction has no such limit and neither did the feature Dijkstra was discussing.
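For anyone who hasn't used it, here's a rough (hypothetical, untested) sketch of what C's goto actually permits: labels are function-scoped, so the only place you can land is elsewhere in the same function, which is why in practice it mostly shows up as error-handling cleanup:

    #include <stdio.h>
    #include <stdlib.h>

    /* C's goto only targets labels in the same function,
       typically for cleanup on the error path. */
    int load(const char *path)
    {
        FILE *f = fopen(path, "rb");
        if (!f)
            goto fail_open;

        char *buf = malloc(4096);
        if (!buf)
            goto fail_alloc;

        /* ... use buf ... */

        free(buf);
        fclose(f);
        return 0;

    fail_alloc:
        fclose(f);
    fail_open:
        return -1;
    }

    /* goto fail_open;  <-- won't compile out here: the label isn't
       visible outside load(), unlike a raw jump instruction, which
       can land anywhere in the program. */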

Only once we convince C developers that a lack of performance isn't inherently harmful.

More like Python / JS devs

C devs are among the few I've met who seem to actually care.

A language community which so prizes the linked list is in no position to go throwing such stones.

Linux lucked out: when you're doing tricky wait-free concurrent algorithms, that intrusive linked list you hand-designed was a good choice. But over in userland you'll find another hand-rolled list in somebody's single-threaded file parser, and oh, a growable array would be fifty times faster; shame the C programmer doesn't have one in their toolbox.
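For what it's worth, the missing toolbox entry is tiny. A minimal growable-array sketch (names made up, error handling kept to the bare minimum, untested):

    #include <stdlib.h>

    struct vec {
        int    *data;
        size_t  len, cap;
    };

    /* Append one element, doubling capacity as needed.
       Usage: struct vec v = {0}; vec_push(&v, 42); ... free(v.data); */
    static int vec_push(struct vec *v, int x)
    {
        if (v->len == v->cap) {
            size_t new_cap = v->cap ? v->cap * 2 : 16;
            int *p = realloc(v->data, new_cap * sizeof *v->data);
            if (!p)
                return -1;
            v->data = p;
            v->cap  = new_cap;
        }
        v->data[v->len++] = x;
        return 0;
    }

Doubling keeps pushes amortized O(1), and the contiguous storage is what makes it so much faster than chasing list nodes in that single-threaded parser.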

I think you misunderstood. That's exactly the problem. C developers consider slow performance harmful, which is often dumb.

Except that's why I use hundreds of C programs every day, but complain about the few Python programs and all the sloppy websites.

You do you. Most people don't care about software that much in general. The most important thing is that it does the job and it does it securely. C won't help you with bugs in any shape or form (in fact it's famously bug-friendly), so it often makes more sense to use a tech stack that either helps with those or lowers the cost on the developer side.

People do care about performance. There are numerous studies showing, for instance, a direct correlation between how fast a page loads and its conversion rate. Also, Chrome's initial pitch was almost entirely about performance, and it delivered. They only became complacent once they had their majority market share.

It makes sense to use a tech stack that lowers the cost on the developer side in the same way that it makes sense to make junk food. Why produce good, tasty food when there is more money to be made by just selling cheap stuff? It does the most important thing: give people calories without poisoning them (in the short term).

Yeah, but we're talking about the performance of the language. People do have a baseline level of accepted performance, but this is about perceived performance, and if software feels slow most of the time it's just because of some dumb design. Like a decision to show an animated request to sign up for the newsletter on the first visit. Or loading 20 high-quality images in a grid view at the top of the page. Or just choosing animations that feel slow even though they're hitting the FPS target perfectly without hiccups.

Get rid of those dumb decisions and it could be pure JS and still be 100% fine. C has no value here. The slow performance of JS is not harmful here. Discord is fast enough although it's Electron. VS Code is also fast enough.

But I'd also like to respond to the food analogy, since it's funny.

Let's say that going full untyped scripting language would be the fast food. You get things fast, it does the job, but is unhealthy. You can write only so much bash before throwing up.

Developing in C is like cooking for those equally dumb, expensive, unsustainable restaurants which give you "an experience" instead of a full healthy meal. Sure, the result uses the best ingredients and it's incredibly tasty, but there's way too little food for too much cost. It's bad for the economy (the money should've been spent elsewhere), bad for the customer (same thing about money, plus he's going to be hungry!) and bad for the cook (if he chose a different job, he'd contribute to society in better ways!) :D

Just go for something in the middle. Eat some C# or something.

Externalising developer cost onto runtime performance only makes sense if humans will spend more time writing the software than running it (in aggregate).

Essentially you’re telling me that the software being made is not useful to many people, because the handful of developers writing it will spend more time writing the software than their userbase will spend executing it.

Otherwise you’re inflicting something on humanity.

Dumping toxic waste in a river is much cheaper than properly disposing of it too; yet we understand that we are causing harm to the environment and litigate people who do that.

Slow software is fine in low volumes (think: shitting in the woods), but dumping it on huge numbers of users by default is honestly ridiculous (Teams, I’m looking at you, with your expectation to run always and on everyone’s machine!)

> Most people don't care about software that much in general.

This is an example of not caring about the software per se, but only about the outcome.

> [C is] in fact famously bug-friendly

Yes, but as a user I like that. I have a game that, from the user experience, seems to have tons of use-after-free bugs. You can see it as a user: strings shown in the UI suddenly turn to garbage and then change very fast. Even with such fatal bugs, the program continues to work, which I like as a user, since I just want to play the game; I don't care if the program is correct. When I want to get rid of the garbage text, I simply close the in-game window, reopen it, and everything is fine.

On the other side there are games written in Pascal or Java, which might not have as many bugs, but every single null pointer exception is fatal. This led to me not playing those games anymore, because doing well and then having the program crash is so frustrating. I'd rather have it run a bit longer with silent corruption.

A null-pointer dereference in C will be just as fatal (modulo optimizations).
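A rough illustration (hypothetical function, not from any real codebase): a null dereference in C usually traps with SIGSEGV, but since it's undefined behaviour the optimizer is also allowed to assume it can never happen and quietly drop code around it:

    #include <stddef.h>

    int first(const int *p)
    {
        int x = *p;        /* if p == NULL: typically a crash at -O0 ...  */
        if (p == NULL)     /* ... but an optimizing compiler may delete   */
            return -1;     /* this check, since *p already "proved" that  */
        return x;          /* p is non-NULL.                              */
    }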

I think people also care that software runs reasonably quickly. Among non-technical people, "my Windows is slow" seems to be a common complaint.

Sure, but this is perceived performance and it's 100% unrelated to the language. It's bugs, I/O, telemetry, updates, ads, other unnecessary background things, or just dumb design (e.g. showing OneDrive locations first when trying to save a file in Word) in general.

C won't help with any of that. Unless the cost of development using it will scare away management which requests those dumb features. Fair enough then :)

> or just dumb design (e.g. showing OneDrive locations first when trying to save a file in Word)

Your example is not one of 'dumb' design, it is a deliberate 'dark pattern' --> pushing you to use OneDrive as much as possible in order to earn more money.

> The most important thing is that it does the job and it does it securely

ROTFL. Is there any security audit? /s

it does the job - mostly.

Maybe it's not 'slow' but more 'generalized for a wide range of use cases'? Because is it really slow for what it does, or simply slower compared to a specialized implementation? (This is like calling a regular family car slow compared to an F1 car... sure, the F1 is fast, but good luck taking your kids on holiday or doing the weekly shopping run.)

glibc is faster in basically every use case, though.

“Generalised to a wide range of use cases” is a really strange way to say “unsuitable for most multi-threaded programs”.

In 2025 an allocator not cratering multi-threaded programs is the opposite of specialisation.

It only matters when your threads allocate with such a high frequency that they run into contention.

Too high an access frequency to a shared resource is not a "general case", but simply poorly designed multithreaded code. (Besides, a high allocation frequency through the system allocator is also poor design for any single-threaded code; application code simply should not assume any specific performance behaviour from the system allocator.)
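If some piece of application code genuinely does need lots of tiny allocations, the usual answer is to stop leaning on the system allocator in that hot path at all. A toy bump-arena sketch (hypothetical names, fixed capacity, no growth or chaining):

    #include <stdlib.h>

    struct arena {
        char  *base;
        size_t used, cap;
    };

    static int arena_init(struct arena *a, size_t cap)
    {
        a->base = malloc(cap);          /* one big allocation up front */
        a->used = 0;
        a->cap  = cap;
        return a->base ? 0 : -1;
    }

    static void *arena_alloc(struct arena *a, size_t n)
    {
        n = (n + 15) & ~(size_t)15;     /* keep 16-byte alignment */
        if (a->used + n > a->cap)
            return NULL;                /* sketch: no growth handling */
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    static void arena_free_all(struct arena *a)
    {
        free(a->base);                  /* everything goes at once */
        a->base = NULL;
        a->used = a->cap = 0;
    }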

Well, what is "such a high frequency"? Different allocators have different breaking points, and musl's is apparently very low.
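You can probe that breaking point yourself with a toy benchmark along these lines (thread count and block size are arbitrary; time it against different libcs):

    #include <pthread.h>
    #include <stdlib.h>

    /* Each thread does nothing but malloc/free small blocks. On an
       allocator with one global lock, wall-clock time barely improves
       as threads are added; a per-thread-cache allocator should scale. */
    #define ITERS 1000000

    static void *worker(void *arg)
    {
        (void)arg;
        for (long i = 0; i < ITERS; i++) {
            void *p = malloc(64);
            if (p)
                free(p);
        }
        return NULL;
    }

    int main(int argc, char **argv)
    {
        pthread_t tid[64];
        int nthreads = argc > 1 ? atoi(argv[1]) : 4;
        if (nthreads < 1)  nthreads = 1;
        if (nthreads > 64) nthreads = 64;
        for (int i = 0; i < nthreads; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (int i = 0; i < nthreads; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }

Run it under `time` with 1, 4, and 8 threads and compare.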

> application code simply should not assume any specific performance behaviour from the system allocator

Technically, yes. Practically, no; that's why e.g. the C++ standard mandates the time complexity of its containers. If you can't assume any specific performance from your system, that means you have to be prepared for every system-provided facility to be exponentially slow, and obviously you can't do that.

Take, for instance, the JSON parser in GTA V [0]: apparently, sscanf(buffer, "%d", &n) calls strlen(buffer) internally, so using it to parse numbers in a hot loop on 2 MiB-long JSON craters your performance. On one hand, sure, one can argue that glibc/musl developers are within their right to implement sscanf however inefficiently they want, and the application developers should not expect any performance targets from it, and therefore, probably should not use it. On the other hand, what is even the point of the standard library if you're not supposed to use it for anything practical? Or, for that matter, why waste your time writing an implementation that no-one should use for anything practical anyhow, due to its abysmal performance?

[0] https://news.ycombinator.com/item?id=26296339
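For anyone curious, here's roughly what the slow pattern and one usual fix look like (simplified sketch, whitespace-separated integers only, untested):

    #include <stdio.h>
    #include <stdlib.h>

    /* Slow: sscanf re-scans the remaining buffer on every call (it may
       call strlen on it), so the loop goes quadratic on a large input. */
    void parse_slow(const char *buf)
    {
        int n, off = 0, consumed;
        while (sscanf(buf + off, "%d%n", &n, &consumed) == 1) {
            off += consumed;
            /* ... use n ... */
        }
    }

    /* Linear: strtol hands back a pointer just past the number,
       so each value costs only the characters it occupies. */
    void parse_fast(const char *buf)
    {
        char *end;
        for (;;) {
            long n = strtol(buf, &end, 10);
            if (end == buf)
                break;          /* no more digits */
            buf = end;
            /* ... use n ... */
        }
    }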