Speed and reliability. Any connector reduces signal quality. The data lines would also need to be longer, because a memory slot won't fit under the cooler where the memory chips sit now, and that adds even more electrical interference and further degrades the signal.

Also, we had memory slots on '90s cards. They were extremely expensive and proprietary. Ever seen a Matrox VRAM upgrade card? I never have.

SOCAMM2 could work. Nvidia's using it on the Vera Rubin boards, as seen here: https://www.pchardwarepro.com/wp-content/uploads/2025/11/que...

I am hoping that we seriously evolve the ATX standard to allow for a socketed GPU board that also enables user-replaceable memory. Seeing an enormous GPU that is larger than the motherboard itself hanging from a PCIe slot feels like horse and buggy shit. I'm imagining two boards back-to-back connected by a central high-bandwidth bus (which could also handle power delivery), so one side of the case is for CPU/RAM and the other side is for GPU/VRAM.

Your solution only allows for one GPU, maybe two if the motherboard is really huge, and it doesn't really solve the slotted VRAM problem.

PCI cards were allowed to be even longer. Old AT and ATX cases had a slotted support bracket to hold the far end of full-length PCI cards. See what an Adaptec 2400A looks like.

> A connector of any kind reduces signal quality.

Like the M.2 connector?

> Data lines need to be longer

Like the data lines going all the way to an on-motherboard storage device?

Yes and yes. NVMe storage is very slow compared to RAM, so it can get away with such things.

Soldered connections are still dramatically better than the M.2 connector (than any connector, really). Have you never wondered why RAM doesn't use PCI Express?
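To put some rough numbers on "NVMe is very slow" relative to RAM, here is a back-of-the-envelope sketch. The figures (PCIe 4.0 x4 link, dual-channel DDR5-6400) are representative assumptions of mine, not from the thread:

```python
# Why NVMe can live behind a connector: its bandwidth target is an order
# of magnitude below main memory's. Figures are approximate/representative.

# PCIe 4.0 x4 link: roughly 2 GB/s per lane after encoding overhead
nvme_gbs = 4 * 2.0                  # ~8 GB/s

# Dual-channel DDR5-6400: 2 channels x 64 bits x 6.4 GT/s / 8 bits-per-byte
dram_gbs = 2 * 64 * 6.4 / 8         # ~102.4 GB/s

print(f"NVMe over PCIe 4.0 x4 : ~{nvme_gbs:.0f} GB/s")
print(f"Dual-channel DDR5-6400: ~{dram_gbs:.1f} GB/s")
print(f"Ratio                 : ~{dram_gbs / nvme_gbs:.0f}x")
```

An interface with an order of magnitude more headroom has far more tolerance for the signal-integrity cost of a connector and longer traces.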

> Like the M.2 connector?

Yes, though likely something with a higher pin count, since memory access is more likely to be random and parallel, versus block storage.

> Like the data lines going all the way to an on-motherboard storage device?

Yes. Why would a GPU manufacturer/packager take on that cost, if it’s presently served well enough for most people by offloading it onto other parts of the system?

The current DIMM and SODIMM modules cannot be used for much higher speeds than are available now.

This is why there are several proposals for improved memory-module form factors with different sockets, such as LPCAMM2, which should be able to work with faster memories.

However, even LPCAMM2 is unlikely to work at the speeds of soldered GDDR7.
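The gap is easy to see from peak-bandwidth arithmetic. This sketch uses representative figures I'm assuming (DDR5-6400 DIMM, LPDDR5X-8533 on a 128-bit LPCAMM2, and a 256-bit GDDR7 bus at 28 GT/s per pin), not numbers from the thread:

```python
# Rough per-interface peak bandwidth: width (bits) x rate (GT/s) / 8.
# Figures are representative assumptions, not exact product specs.

def peak_gbs(width_bits, rate_gtps):
    """Peak bandwidth in GB/s for a memory bus."""
    return width_bits * rate_gtps / 8

dimm_ddr5  = peak_gbs(64, 6.4)     # one DDR5-6400 DIMM channel
lpcamm2    = peak_gbs(128, 8.533)  # LPDDR5X-8533 on a 128-bit LPCAMM2
gddr7_256  = peak_gbs(256, 28)     # 256-bit GDDR7 at 28 GT/s per pin

print(f"DDR5 DIMM      : ~{dimm_ddr5:.1f} GB/s")
print(f"LPCAMM2        : ~{lpcamm2:.1f} GB/s")
print(f"Soldered GDDR7 : ~{gddr7_256:.0f} GB/s")
```

Even granting the socketed module its best case, the soldered GDDR7 configuration is still several times faster, which is the whole argument for soldering.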

Can't they make it easier to solder / desolder?

It is not very difficult to solder/desolder, but you need suitable tools, which are not cheap.

Moreover, when you do this manually, unless it is something that you do every day, it may be quite difficult to be certain that the soldering has been done well enough to remain reliable in long-term use. In industry, very expensive equipment, e.g. X-ray machines, is used to check the quality of solder joints.

So unlike inserting a memory module in a socket, which is reasonably foolproof, soldering devices is not something that could be used in a product sold to the general population.

When I was young, there still existed computer kits where you soldered all the ICs onto the motherboard yourself, so you could get a computer at a much lower price than a fully assembled one. My first PC was of this kind.

However, at that time PCs were still bought by only a small fraction of the population, people who could be expected to be willing to learn things like how to solder, and to accept the risk of damaging the product they had bought. Today PCs are sold to the general public, so nobody would offer GPU cards that you must solder yourself.