Sockets have resistance and crosstalk, which affects signal integrity.
Wait, you're telling me I should have been desoldering the sockets off my motherboard and soldering my RAM directly to the leads this entire time?
Compression Attached Memory Module (CAMM) tries to be a middle-term solution for that, by reducing how susceptible your average RAM socket is to latency and signal integrity issues. But, at this point, I can see CAMM-delivered memory being reduced to a sort of slower "CXL.mem" device.
Seriously though,
Would desoldering the sockets help?
Why are the sockets bad?
As stated previously, the sockets reduce signal integrity, which doesn't necessarily make them "bad," but is why Framework wasn't able to use socketed RAM to maximize the potential of this CPU.
This sort-of-interview of Nirav Patel (CEO of Framework) explains it in a bit more detail: https://www.youtube.com/watch?v=-lErGZZgUbY
Basically, they need to use LPDDR5X memory, which isn't available in socketed form, for signal integrity reasons.
Which means you won't see an improvement if you solder your RAM directly. I think that's mostly because your home soldering job will suffer signal integrity issues, but also because your RAM isn't LPCAMM and isn't spread across a 256-bit bus.
The "why" hasn't been answered. I understand the previous statements very clearly. It makes intuitive sense to me, but I want to know more.
Like physics PhD-level more.
I believe the reason is that, at the frequencies these CPUs use to talk to RAM, the reflection coefficient[1] starts playing a big role. This means any change in impedance along the wire causes reflections of the signal.
This is also the reason why you can't just use a dumb female-to-female HDMI coupler and expect video to work. All such devices are active: they read the stream on the input and relay it on the output.
[1]: https://en.wikipedia.org/wiki/Reflection_coefficient
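To make the idea concrete, here's a tiny sketch of the standard formula from that article, Γ = (Z_L − Z_0) / (Z_L + Z_0). The 50/60 Ω values are illustrative assumptions, not actual DDR bus impedances:

```python
def reflection_coefficient(z_load: float, z_source: float) -> float:
    """Fraction of the incident voltage wave reflected at an impedance discontinuity."""
    return (z_load - z_source) / (z_load + z_source)

# A perfectly matched line reflects nothing:
print(reflection_coefficient(50.0, 50.0))            # 0.0
# A socket that bumps a 50-ohm trace to 60 ohms reflects ~9% of the wave:
print(round(reflection_coefficient(60.0, 50.0), 3))  # 0.091
```

Every connector, via, and solder joint is such a discontinuity, and at multi-gigabit rates those small reflections stack up into closed eye diagrams.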
See also RF insertion loss and how that's dealt with, PCIe retimers, etc.
Above certain frequencies, you start running into major issues with signal integrity, and fixing them is very difficult without any extra circuitry.
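A common rule of thumb for "certain frequencies": a trace starts behaving like a transmission line once its length exceeds roughly 1/6 of the distance the edge travels during its rise time. The propagation velocity (~1.5e8 m/s in FR-4) and the 50 ps edge are illustrative assumptions:

```python
def critical_length_m(rise_time_s: float, velocity_m_s: float = 1.5e8) -> float:
    """Rough length beyond which a trace must be treated as a transmission line."""
    return rise_time_s * velocity_m_s / 6

print(critical_length_m(1e-9) * 1000)    # ~25 mm: slow 1 ns edges tolerate sloppy routing
print(critical_length_m(50e-12) * 1000)  # ~1.25 mm: fast edges make even short runs critical
```

That's why slow buses could cross a socket without anyone caring, while modern memory edges make every millimeter of the path a controlled-impedance design problem.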
Only if you were pushing data through so fast that the bits got corrupted before. That's literally why AMD told Framework it wouldn't support any configuration other than soldered RAM, in this case.
Yes. (That isn't actually possible, because the pinouts are different, but soldered RAM is faster.)
You might be able to dial in a higher memory overclock.
mind the gap