> Very few applications scale with cores
You mean like compilers and test suites? Very few professional workloads don't parallelize well these days.
Compilers and test suites do scale (at least for C/C++ and Rust, which is what I work with). But I think the parent comment referred to consumer applications: games, word processing, light browsing, ...
(Games these days do scale better than they used to, but only up to a point.)
I find that most tools I write for my own use can be made to scale with cores, or run so fast that the overhead of starting threads exceeds the program's runtime. But I write them in Rust, which makes parallelism easy. If I wrote that code in C++ I probably wouldn't bother trying to parallelize.
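To illustrate what "Rust makes parallelism easy" looks like in practice, here's a minimal sketch using only the standard library's scoped threads; the workload (summing squares) is a hypothetical stand-in for whatever the tool actually computes:

```rust
use std::thread;

// Split a slice into chunks and process them on separate threads.
// thread::scope lets the threads borrow `data` without Arc or cloning.
fn parallel_sum_of_squares(data: &[u64], n_threads: usize) -> u64 {
    let chunk_size = ((data.len() + n_threads - 1) / n_threads).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(chunk_size)
            .map(|chunk| s.spawn(move || chunk.iter().map(|x| x * x).sum::<u64>()))
            .collect();
        // Join every worker and combine the partial sums.
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<u64> = (1..=1000).collect();
    let total = parallel_sum_of_squares(&data, 4);
    // Sanity check against the sequential result.
    assert_eq!(total, data.iter().map(|x| x * x).sum::<u64>());
    println!("{total}");
}
```

The same pattern in C++ needs manual thread management (or a library like TBB), which is the kind of friction that makes "probably not bother" a reasonable call.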
But those tools aren't really compute bound anyway - you're not buying a workstation to run them, you're getting a consumer laptop or a tablet.
And that consumer device should have ECC! That's the whole discussion here.
It's confusing because a few comments up someone said "for the vast majority of people single core performance is all they care about, it's also cheaper", which is unrelated to ECC.
I think it's coherent -- it's an argument for why most people don't want to buy Workstation class products just to get ECC. (Prices scale with core count. Not linearly, but still.)
Why? If your device is a thin client for web services/gaming, the risk of bitflips/bad RAM is a minor annoyance.
I disagree with handwaving bitflips away as a minor annoyance. Consumers don't love software crashing, even if they don't have any data they care about.
Imagine ECC was free -- would you rather have free ECC and no bitflips, or no ECC and bitflips? It's hard to imagine choosing bitflips.
ECC would save an unbelievable amount of labor. A shocking number of people have jobs looking at various logs.
Test suites often don't scale, actually. Unit tests usually run single-threaded by default, and also relatively often have side effects on the system that mean they're unsafe to run in parallel. (Sure, sure, you could definitely argue the latter thing is a skill issue.)
In theory, do you need a single machine for any of that, or would it be cheaper to use a low-availability cloud cluster? Tests are totally independent, and builds are probably parallel enough.
Only a small percentage of computer users are programmers.