> If you plot the line it's probably still curving up and I'm not clear at which point (if ever) it would start bending the other way.

I suspect when Moore's law ends and we cannot build substantially faster machines anymore.

One interesting thing that most non-systems programmers don't know is that memory and CPU performance have improved at completely different rates. That's a large part of why CPUs are many times faster but software still feels slow.

The systems people worry more about memory usage for this reason, and prefer manual memory management.
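To make that gap concrete, here's a rough C sketch of my own (sizes are assumptions, tune for your machine; compile with something like gcc -O2): a loop of register arithmetic versus a dependent pointer chase over a working set much larger than the caches. Both loops do the same number of "steps", but on typical hardware the second one is dramatically slower because each step waits on DRAM.

```c
/* Sketch: compute-bound loop vs. memory-latency-bound pointer chase.
   Working-set size is an assumption meant to exceed the last-level cache. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define N (1u << 24)   /* 16M size_t slots, ~128 MB */

/* splitmix64-style generator, good enough for a demo */
static uint64_t rng_state = 0x9e3779b97f4a7c15ull;
static uint64_t rng(void) {
    uint64_t z = (rng_state += 0x9e3779b97f4a7c15ull);
    z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ull;
    z = (z ^ (z >> 27)) * 0x94d049bb133111ebull;
    return z ^ (z >> 31);
}

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    size_t *next = malloc((size_t)N * sizeof *next);
    if (!next) return 1;

    /* Sattolo's algorithm: one random cycle, so each load depends on the
       previous one and the prefetcher can't guess the next address. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)(rng() % i);
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    /* Compute-bound: N adds on data that lives in registers/L1. */
    double t0 = now_sec();
    volatile size_t acc = 0;
    for (size_t i = 0; i < N; i++) acc += i;
    double t_cpu = now_sec() - t0;

    /* Latency-bound: N dependent loads scattered over ~128 MB. */
    t0 = now_sec();
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = next[p];
    double t_mem = now_sec() - t0;

    printf("compute-bound loop: %8.3f s  (sum=%zu)\n", t_cpu, acc);
    printf("pointer-chase loop: %8.3f s  (end=%zu)\n", t_mem, p);
    free(next);
    return 0;
}
```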

> ... memory and cpu performance have improved at completely different rates.

This is overly simplified. To a first approximation, bandwidth has kept pace with CPU performance, while main memory latency is basically unchanged. My 1985 Amiga had 125ns main-memory latency, though the processor itself saw 250ns latency; current main memory latencies are in the 50-100ns range. Caches are what 'fix' this discrepancy.

You would need to clarify how manual memory management relates to this... (cache placement/control? copying GCs causing caching issues? something else?)
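For the "cache placement/control" reading, here's one illustrative C sketch (my example, not the parent's claim): with manual control over layout you can choose struct-of-arrays over array-of-structs, so a pass that touches one field streams dense, fully-used cache lines instead of pulling in bytes it never reads.

```c
/* Sketch: array-of-structs vs. struct-of-arrays for a pass over one field.
   N and the field layout are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

#define N 1000000u

/* Array-of-structs: summing `hp` alone still drags every 32-byte struct
   through the cache, line by line. */
struct entity_aos {
    float pos[3];
    float vel[3];
    int   hp;
    int   flags;
};

static long sum_hp_aos(const struct entity_aos *e, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += e[i].hp;          /* only ~8 of every 64 fetched bytes are used */
    return total;
}

/* Struct-of-arrays: the same pass streams over a dense int array, so every
   fetched byte is useful and the hardware prefetcher sees a simple pattern. */
struct entities_soa {
    float pos_x[N], pos_y[N], pos_z[N];
    float vel_x[N], vel_y[N], vel_z[N];
    int   hp[N];
    int   flags[N];
};

static long sum_hp_soa(const struct entities_soa *e, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += e->hp[i];
    return total;
}

int main(void) {
    struct entity_aos   *aos = calloc(N, sizeof *aos);
    struct entities_soa *soa = calloc(1, sizeof *soa);
    if (!aos || !soa) return 1;
    printf("%ld %ld\n", sum_hp_aos(aos, N), sum_hp_soa(soa, N));
    free(aos);
    free(soa);
    return 0;
}
```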

Moore's Law has been dead for a long time. The doubling rate of transistors is now drastically below Moore's prediction.

We're adding transistors at ~18%/year. That's waaaaay below the ~41% needed to sustain Moore's law.

Even the "soft" version of Moore's law (a description of silicon performance vs. literally counting transistors) hasn't held up. We are absolutely not doubling performance every 24 months at this point.
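For reference, the arithmetic behind those two percentages (my quick sketch, nothing more; link with -lm):

```c
/* Back-of-the-envelope check: a 24-month doubling implies ~41%/year growth,
   and at ~18%/year a doubling takes roughly 4.2 years instead of 2. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double rate_for_doubling = pow(2.0, 1.0 / 2.0) - 1.0;  /* growth/yr for a 2-yr doubling */
    double years_at_18       = log(2.0) / log(1.18);        /* years to double at 18%/yr */
    printf("rate needed for 24-month doubling: %.1f%%/year\n", 100 * rate_for_doubling);
    printf("years to double at 18%%/year:       %.2f\n", years_at_18);
    return 0;
}
```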

Moore's law has kind of ended already, and has maybe been over for a few years. Even if you can make a faster chip, there's a basic thermodynamics problem with running it at full tilt for any meaningful period of time. I would have expected that to have impacted software development, but I don't think it particularly has, and there's also no obvious gain in e.g. compilers or other optimization that would have countered the effect.

Probably architecture changes (x86 has a lot of historic baggage that makes newer designs harder) and also more specialized hardware in the CPU. That might also be one of the reasons Apple went this way with its M-series silicon.

But the machines aren't really "faster" in clock speed— for a long time now the gains have been in better and more local caching + parallelism at both the core and instruction level.

> parallelism at both the core and instruction level

Which most programs don't take advantage of.
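As a toy illustration of the instruction-level part (my example, not anything authoritative): summing with one accumulator serializes on a single dependency chain, while a handful of independent accumulators lets an out-of-order core keep several adds in flight.

```c
/* Toy ILP example: one accumulator vs. four independent accumulators.
   Compilers may do this transformation for you (e.g. at -O3 with
   -ffast-math for floats); the point is that the hardware parallelism
   exists whether or not the code exposes it. */
#include <stddef.h>

/* One long add-after-add dependency chain: roughly one add per FP latency. */
float sum_serial(const float *a, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++) s += a[i];
    return s;
}

/* Four independent chains: the core can overlap adds from all of them. */
float sum_ilp(const float *a, size_t n) {
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i + 0];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++) s0 += a[i];   /* leftover elements */
    return (s0 + s1) + (s2 + s3);
}
```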

Neural networks do, which is part of why they’re taking off right now.
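A rough sketch of why (mine, with illustrative OpenMP pragmas rather than anything a real framework actually ships): a dense layer is mostly independent multiply-adds, so rows spread across cores and each dot product maps onto the vector units. Build with something like gcc -O2 -fopenmp; without -fopenmp the pragmas are simply ignored.

```c
/* y = W * x for one dense layer: rows are independent (core-level
   parallelism), and each dot product vectorizes (SIMD / instruction-level
   parallelism). Real frameworks use tuned BLAS or GPU kernels instead. */
#include <stddef.h>

void dense_layer(const float *W, const float *x, float *y,
                 size_t rows, size_t cols) {
    #pragma omp parallel for
    for (size_t r = 0; r < rows; r++) {
        float acc = 0.0f;
        #pragma omp simd reduction(+:acc)
        for (size_t c = 0; c < cols; c++)
            acc += W[r * cols + c] * x[c];
        y[r] = acc;
    }
}
```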
