Compression Attached Memory Module (CAMM) tries to be a mid-term solution for that, by reducing how crappy your average RAM socket is when it comes to latency and signal integrity. But, at this point, I can see CAMM-delivered memory being reduced to a sort of slower "CXL.mem" device.

Seriously though,

Would desoldering the sockets help?

Why are the sockets bad?

As stated previously, the sockets reduce signal integrity, which doesn't necessarily make them "bad," but it is why Framework wasn't able to use socketed RAM to maximize the potential of this CPU.

This sort-of-interview with Nirav Patel (CEO of Framework) explains it in a bit more detail: https://www.youtube.com/watch?v=-lErGZZgUbY

Basically, they need to use LPDDR5X memory, which isn't available in socketed form for signal integrity reasons.

Which means you won't see an improvement if you solder your RAM directly, I think mostly because your home soldering job will suffer signal integrity issues, but also because your RAM isn't LPCAMM and isn't spread across a 256-bit bus.
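To put rough numbers on the bus-width point, here's a back-of-the-envelope Python sketch. The speed grades are illustrative assumptions (LPDDR5X-8000 on a 256-bit bus versus a typical dual-channel, 128-bit socketed DDR5-5600 kit), not Framework's published spec:

    # Rough comparison of theoretical peak memory bandwidth.
    # Assumed, illustrative configurations: 256-bit LPDDR5X-8000 (soldered)
    # vs. a typical dual-channel (128-bit) socketed DDR5-5600 setup.

    def peak_bandwidth_gbps(bus_width_bits: int, transfer_rate_mtps: int) -> float:
        """Theoretical peak bandwidth in GB/s: bytes per transfer * transfers per second."""
        bytes_per_transfer = bus_width_bits / 8
        return bytes_per_transfer * transfer_rate_mtps / 1000  # MT/s -> GB/s

    soldered = peak_bandwidth_gbps(bus_width_bits=256, transfer_rate_mtps=8000)
    socketed = peak_bandwidth_gbps(bus_width_bits=128, transfer_rate_mtps=5600)

    print(f"256-bit LPDDR5X-8000: {soldered:.1f} GB/s")  # 256.0 GB/s
    print(f"128-bit DDR5-5600:    {socketed:.1f} GB/s")  # 89.6 GB/s

Even a perfect soldering job on a conventional 128-bit DIMM topology leaves most of that headline bandwidth on the table.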

They "why" hasn't been answered. I understand the previous statements very clearly. It makes intuitive sense to me, but I want to know more.

Like physics PhD-level more.

I believe the reason is that, at the frequencies these CPUs are talking to RAM, the reflection coefficient[1] starts playing a big role. This means any change in impedance along the wire causes reflections of the signal.

This is also the reason you can't just use a dumb female-to-female HDMI coupler and expect video to work. All such devices are active: they read the stream on the input and relay it on the output.

[1]: https://en.wikipedia.org/wiki/Reflection_coefficient
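To make that concrete, here's a minimal Python sketch of the reflection coefficient from [1], gamma = (Z_load - Z0) / (Z_load + Z0). The impedance values are made-up illustrative numbers for a nominal 50-ohm trace with a socket-induced bump, not measured socket data:

    # Minimal illustration of the reflection coefficient from [1]:
    #   gamma = (Z_load - Z0) / (Z_load + Z0)
    # Any impedance discontinuity (e.g. a socket contact) with Z_load != Z0
    # reflects part of the signal back toward the driver.
    # Impedance values below are illustrative, not measured socket data.

    def reflection_coefficient(z_load: float, z0: float) -> float:
        return (z_load - z0) / (z_load + z0)

    z0 = 50.0        # characteristic impedance of the trace, ohms (typical single-ended)
    z_socket = 65.0  # hypothetical impedance bump introduced by a socket contact

    gamma = reflection_coefficient(z_socket, z0)
    reflected_power = abs(gamma) ** 2

    print(f"gamma = {gamma:.3f}")                                # ~0.130
    print(f"reflected power fraction = {reflected_power:.1%}")   # ~1.7%

Even a few percent of reflected energy matters when a bit period is only a small fraction of a nanosecond: the reflection bounces back into later bits and helps close the eye.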

See also RF insertion loss and how that's dealt with, PCIe retimers, etc.

Above certain frequencies, you start running into major issues with signal integrity, and fixing them is very difficult without any extra circuitry.
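As a rough illustration of how insertion loss stacks up along a channel (the dB-per-inch and connector figures below are ballpark assumptions for lossy FR-4 in the multi-GHz range, not measurements):

    # Rough sketch of why trace length and connectors hurt at high frequencies:
    # insertion loss in dB adds up linearly along the channel, but the surviving
    # signal amplitude falls off exponentially with total dB.
    # The per-inch and connector figures are ballpark assumptions, not measured values.

    def remaining_amplitude(loss_db: float) -> float:
        """Fraction of the original voltage swing left after `loss_db` of insertion loss."""
        return 10 ** (-loss_db / 20)

    loss_per_inch_db = 1.0   # assumed channel loss per inch at the signalling frequency
    connector_loss_db = 1.5  # assumed extra penalty for a socket/connector discontinuity

    for inches in (2, 6, 10):
        total_db = inches * loss_per_inch_db + connector_loss_db
        print(f"{inches:2d} in + socket: {total_db:4.1f} dB -> "
              f"{remaining_amplitude(total_db):.0%} of swing remains")

That exponential falloff is why past a certain point you need retimers, redrivers, or simply a much shorter path (i.e. memory soldered right next to the package) rather than a better socket.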