This is quite the silly argument.
* "people" generally don't spend their time compiling the Linux kernel, or anything of the sort.
* For most daily uses, current-gen CPUs are only marginally faster than two generations back. Not worth spending a large amount of money every 3 years or so.
* Other aspects of your computer, like memory (capacity mostly) and storage, can also be perf bottlenecks.
* If, as a developer, you're repeatedly compiling a large codebase, what you may really want is a build farm rather than the latest-gen CPU in each developer's individual PC/laptop.
Just because it doesn't match your situation doesn't make it a silly argument.
Even though I haven't compiled a Linux kernel in over a decade, I still waste a lot of time compiling. On average, each week I have 5-6 half-hour compiles, mostly when I'm forced to change base header files in a massive project.
This is CPU bound for sure - I'm typically using just over half my 64GB RAM and my development drives are on RAIDed NVMe.
I'm still on a Ryzen 7 5800X, because that's what my client specified they wanted me to use 3.5 years ago. Even upgrading to the (already 3-year-old) 5950X would be a drop-in replacement and would double the core count, so I'd expect roughly double the performance (although maybe not quite, as there may be increased memory contention). At current prices for that CPU, the upgrade would pay for itself within 1-2 weeks.
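For concreteness, a back-of-the-envelope version of that payback estimate, with an assumed CPU price and billable rate plugged in (the compile numbers are the ones above):

```python
# Rough payback maths; cpu_price and hourly_rate are assumptions, adjust to taste.
compiles_per_week = 5.5       # "5-6 half-hour compiles" per week
hours_per_compile = 0.5
wait_hours = compiles_per_week * hours_per_compile            # ~2.75 h/week spent waiting

speedup = 2.0                 # 16 cores vs 8, assuming near-linear build scaling
hours_saved = wait_hours * (1 - 1 / speedup)                  # ~1.4 h/week saved

cpu_price = 300.0             # assumed current 5950X street price, USD
hourly_rate = 150.0           # assumed billable rate, USD

weeks_to_break_even = cpu_price / (hours_saved * hourly_rate)
print(f"break-even after ~{weeks_to_break_even:.1f} weeks")   # ~1.5 weeks
```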
The reason I don't upgrade is policy - my client specified this exact CPU so that my development environment matches their standard setup.
The build farm argument makes sense in an office environment where the majority of developer machines are mostly idle most of the time. It's completely unsuitable for remote working, where each developer has a single machine and latency and bandwidth to shared resources are poor.
Why is the client so strict on what you use as a dev machine?
I work in game development. All the developers typically have the same spec machine, chosen at the start of the project to be fairly high end with the expectation that when the project ships it'll be a roughly mid range spec.
My most CPU-intensive task is running the full test suite of a customer's Rails app. I could probably shave a large percentage off its running time, but it also contains integration tests run with Chrome. What I do to shorten the test time is run only the test files that changed. The Rails boot time is there either way.
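To be concrete, a minimal sketch of the "only the changed files" approach, assuming a conventional test/ or spec/ layout and `bin/rails test` as the runner (swap in `bundle exec rspec` or whatever the app actually uses):

```python
#!/usr/bin/env python3
"""Run only the test files touched since the previous commit (sketch)."""
import subprocess
import sys

# Files changed between HEAD~1 and the working tree; use "HEAD" instead
# if you only care about uncommitted changes.
changed = subprocess.run(
    ["git", "diff", "--name-only", "HEAD~1"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# Keep only Ruby test files (assumed test/ or spec/ layout).
test_files = [f for f in changed
              if f.startswith(("test/", "spec/")) and f.endswith(".rb")]

if not test_files:
    print("No changed test files; nothing to run.")
    sys.exit(0)

# Hand the shortlist to the project's test runner.
subprocess.run(["bin/rails", "test", *test_files], check=True)
```

It doesn't help with the Chrome-based integration tests or the Rails boot time, but it keeps the feedback loop proportional to the size of the change.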
The CI system is still slower than my laptop. We are not really concerned about it.
I'm waiting for something to fail before I upgrade, because spending $3000 on a laptop won't gain me an extra $3000 from my customer.