This article skips an important step: showing how a faster CPU demonstrably improves developer productivity.
I would agree that faster compile times can significantly improve productivity. 30s is long enough for a developer to get distracted and go off and check email, look at social media, etc. Basically, turning 30s into 3s can keep a developer in flow.
The critical thing we're missing here is how increasing the CPU speed would decrease the compile time. What if the compiler is IO bound? Or memory bound? Removing one bottleneck just gets you to the next bottleneck, not necessarily all the performance gains you want.
I wish I was compiler bound. Nowadays, with everything being in the cloud or whatever I'm more likely to be waiting for Microsoft's MFA (forcing me to pick up my phone, the portal to distractions) or getting some time limited permission from PIM.
The days when a 30-second pause for the compiler was the slowest part are long over.
The circuit design software I use, Altium Designer, has a SaaS cloud for managing libraries of components, and version control of projects. I probably spend hours a year waiting for simple things like "load the next 100 parts in this list" or "save this tiny edit to the cloud" as it makes API call after call to do simple operations.
And don't get me started on the cloud ERP software the rest of the company uses...
You must be a web developer. Doing desktop development, nothing is in the cloud for me. I’m always waiting for my compiler.
More likely in an enterprise company using MS tooling (AD/Entra/Outlook/Teams/Office...) with "stringent" security settings.
It gets ridiculous quickly, really.
In some cases, the bottlenecks are external.
I've seen a test environment which has most assets local but a few shared services and databases accessed over a VPN which is evidently a VIC-20 connected over dialup.
The dev environment can take 20 seconds to render a page that takes under 1 second on prod. Going to a newer machine with twice the RAM brought no meaningful improvement.
They need a rearchitecture of their dev system far more than faster laptops.
> under 1 second on prod
There’s your problem. If your expectation was double-digit milliseconds in prod, then non-prod and its VPN also wouldn’t be an issue.
An IO-bound compiler would be weird. Memory-bound, perhaps, but newer CPUs also tend to communicate with RAM faster, so...
I think just having LSP give you answers 2x faster would be great for staying in flow.
The compiler is usually IO bound on Windows due to NTFS: small files stored in the MFT plus a lock-contention problem. If you put everything on a ReFS volume it goes a lot faster.
Applies to git operations as well.
by "IO bound" you mean "MS defender bound"
Dev Drive can help with that as well
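If you want to see the small-file penalty on your own machine, here is a rough sketch: time writing a batch of small files into a directory on NTFS, then into a directory on a ReFS/Dev Drive volume, and compare. (The function and paths are hypothetical; on a machine with Defender real-time scanning, excluding the test directory or using a Dev Drive, which defaults to Defender's performance mode, typically changes the numbers noticeably.)

```python
import os
import time


def small_file_write_bench(dirpath: str, n: int = 2000, size: int = 4096) -> float:
    """Create n small files of `size` bytes in dirpath and return the
    elapsed wall time in seconds. Run once against an NTFS directory
    and once against a ReFS/Dev Drive directory to compare volumes."""
    payload = b"x" * size
    start = time.perf_counter()
    for i in range(n):
        with open(os.path.join(dirpath, f"bench_{i}.tmp"), "wb") as f:
            f.write(payload)
    elapsed = time.perf_counter() - start
    # Clean up the test files afterwards.
    for i in range(n):
        os.remove(os.path.join(dirpath, f"bench_{i}.tmp"))
    return elapsed
```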
I've seen gcc+ld use a large amount of disk (dozens of GB) during LTO.
I got my boss to get me the most powerful server we could find, $15,000 or so. In benchmarks there was minimal benefit, and sometimes a loss, going with more than 40 cores even though it has 56 (52? I can't check now). Sometimes using more cores slows the build down. We have concluded that memory bandwidth is the limit, but are not sure how to prove it.
If that's true, then have you looked at the Threadripper or the new Ryzen AI+ 395? I think it has north of 200 GB/s.
I have not (the above machine was an Intel); someone else did get a Threadripper, though I don't know which. He reported similar numbers, though I think he was able to use more cores, still not all.
The larger point is that the fastest machine on paper may not be faster for your workload, so benchmark before spending money. Your workload may be different.
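One cheap way to probe the memory-bandwidth hypothesis is a -j sweep: time the same clean build at increasing parallelism and watch where the curve flattens or turns back up. This is only a sketch; `cmd` is a stand-in for whatever your real clean-then-build command is, and a flat curve only suggests (not proves) a shared-resource ceiling, so tools like perf or likwid would be the next step for measuring bandwidth directly.

```python
import subprocess
import time


def time_build(cmd: list[str], jobs: int) -> float:
    """Run one build at the given parallelism level and return wall time.
    `cmd` is a placeholder for your real build command, e.g. ["make"];
    a -jN flag is appended to it."""
    start = time.perf_counter()
    subprocess.run(cmd + [f"-j{jobs}"], check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start


def sweep(cmd: list[str], job_counts=(8, 16, 24, 32, 40, 48, 56)) -> dict[int, float]:
    """Time the build at each core count (do a clean build between runs).
    If wall time stops improving well before cores run out, the
    bottleneck is likely something shared, such as memory bandwidth."""
    return {j: time_build(cmd, j) for j in job_counts}
```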
A lot of people miss the multi-core advantage. Often, compile time decreases almost linearly with the number of cores.
You do need a good SSD though. There is a new generation of PCIe 5 SSDs that seems like it might be quite a bit faster.
I don’t think that we live in an era where a hardware update can bring you down to 3s from 30s, unless the employer really cheaped out on the initial buy.
Now, in TFA they compare a laptop to a desktop, so I guess the title should be "you should buy two computers".
Another thing to keep in mind when compiling is that adding more cores doesn't help with link time, which is usually stuck on a single core and can be a bottleneck.
There are plenty of linkers that parallelize linking, mold and LLD for example:
https://github.com/rui314/mold?tab=readme-ov-file#why-is-mol...
https://llvm.org/devmtg/2017-10/slides/Ueyama-lld.pdf