Yes. Signal integrity is so finicky at the frequencies DRAM operates at that it starts to matter whether the plated holes that complete the circuit are drilled all the way through the board or stopped halfway: signals leak into the unused stubs of those holes and reflect back into the trace, causing interference. Adding a connector between RAM and CPU is like extending the long pole in the middle of the tent by inserting a stack of elephants into something already shaped like the crankshaft from a wrecked car.

Besides, no one strictly needs mid-life-upgradable RAM. You just want to be able to upgrade RAM after purchase because it's cheaper upfront, and because it leaves less room for supply-side price gouging. Those aren't technical reasons: there's nothing stopping you from optioning 2TB of RAM at purchase and being done for 10 years.

In the past, at least, RAM upgrades weren't just about filling the slots you couldn't afford to fill on day one. RAM modules also got denser and faster over time. This meant that after waiting a couple of years you could add more and better RAM to your system than was even physically possible to install upfront.

Part of the reason I have doubts about the physical necessity here is that PCI Express (x16) is roughly keeping up with GDDR in terms of bandwidth. Of course they are not completely apples-to-apples comparable, but it proves at least that a high-bandwidth unsoldered interface is possible. I will admit, though, that what I can find indicates signal integrity is the biggest issue each new generation of PCIe has to overcome.
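To put rough numbers on that comparison: the per-lane and per-pin rates below are published figures for PCIe 5.0 and GDDR6X, but the aggregate numbers ignore protocol overhead, so treat this as a ballpark sketch rather than a benchmark.

```python
# Ballpark bandwidth comparison: PCIe x16 slot vs a GDDR6X memory bus.

def pcie_x16_gbs(gt_per_lane):
    # 16 lanes; 128b/130b encoding (PCIe 3.0+) costs ~1.5%
    return gt_per_lane * 16 * (128 / 130) / 8  # GB/s

def gddr_bus_gbs(gbps_per_pin, bus_width_bits):
    return gbps_per_pin * bus_width_bits / 8  # GB/s

print(f"PCIe 5.0 x16 (32 GT/s/lane):        ~{pcie_x16_gbs(32):.0f} GB/s")
print(f"GDDR6X, 384-bit bus @ 21 Gb/s/pin: ~{gddr_bus_gbs(21, 384):.0f} GB/s")
# Per pin, PCIe 5.0 is actually ahead of GDDR6X (32 GT/s vs ~21 Gb/s);
# the GPU memory bus wins on sheer width, not per-pin signaling rate.
```

So the "roughly keeping up" claim holds best on a per-pin basis; the soldered bus still wins in aggregate by being much wider.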

It's possible that the best solution for discrete PC components will be to move what we today call RAM onto the CPU package (which is also very likely to become a CPU+GPU package) and then keep PCIe x16 around to provide another tier of fast but upgradeable storage.

I am personally dealing with PCIe signal integrity issues at work right now, so I can say yes, it’s incredibly finicky once you go outside the normal “slot directly below the CPU” situation. And I only care about Gen 3 speeds right now.

But in general yes, PCIe vs RAM bandwidth is like comparing apples to watermelons. One’s bigger than the other and they’re both fruits, but they’re not the same thing.

Generally people don’t talk about random-access PCIe latency because it generally doesn’t matter. You’re looking at a best-case 3x latency penalty for PCIe vs RAM, usually more like an order of magnitude or more. PCIe is really designed for maximum throughput, not minimum latency. If you make the same tradeoffs with RAM you can start tipping the scale the other way - but people really care about random-access latency in RAM (almost like it’s in the name), so that generally doesn’t happen outside of specific scenarios. 500ns 16000MT/s RAM won’t sell, and would be a massive pain: you’d probably need to 1.5x the bus width to achieve that, which means more pins on the CPU, which means larger packages, which means more motherboard real estate taken and more trace length/signal integrity concerns - and you’d need to somehow convince everyone to use your new, larger DIMM...
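The arithmetic behind the bus-width remark is just bandwidth = transfer rate × bus width. A minimal sketch, assuming a standard 64-bit (non-ECC) DDR channel; the 10667 MT/s figure is my illustrative pick, not from the comment above:

```python
# Per-channel DRAM bandwidth = transfers/s x bytes per transfer.
# A standard (non-ECC) DDR channel moves 64 bits = 8 bytes per transfer.

def channel_bw_gbs(mt_per_s, bus_bits=64):
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9  # GB/s

print(channel_bw_gbs(6400))       # DDR5-6400 on 64 bits: 51.2 GB/s
print(channel_bw_gbs(16000))      # hypothetical 16000 MT/s: 128 GB/s
# The same 128 GB/s at a tamer ~10667 MT/s needs a 96-bit bus - i.e.
# 1.5x the data pins per channel, which is the packaging cost above.
print(channel_bw_gbs(10667, 96))  # ~128 GB/s
```

Either way you pay: crank the transfer rate and signal integrity gets harder, or widen the bus and the pin count and routing get harder.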

You can also add more memory channels to effectively double/quadruple/octuple memory bandwidth, but again, package constraints + signal integrity increase costs substantially. My threadripper pro system does ~340GB/s and ~65ns latency (real world) with 8 memory channels - but the die is huge, CPUs are expensive as hell, and motherboards are also expensive as hell. And for the first ~9 months after release the motherboards all struggled heavily with various RAM configurations.