That's a question for AMD and TSMC. They only have so much space on the silicon. More memory channels means less of something else. This isn't a "framework platform" issue; it's the specification of that CPU.

Well they chose to use this hardware platform. It all sounds like market segmentation to me, now that AMD is on top.

To be clear, AMD is giving you 2x the bandwidth of competing chips and you're complaining that it isn't 4x.

My complaints are the maximum RAM of the system and the modularity of the RAM.

With an increased number of channels, you could have a greater amount of RAM at a lower frequency but at the same bandwidth. So you would at least be able to run some of these much larger AI models.
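The channels-vs-frequency tradeoff is just multiplication: peak bandwidth is bus width times transfer rate, so doubling the bus width at half the clock lands on the same number. A rough sketch, using illustrative figures (a 128-bit dual-channel DDR5-5600 desktop and a 256-bit LPDDR5X-8000 bus; these are assumed example values, not official specs):

```python
def bandwidth_gbs(bus_width_bits, mega_transfers_per_s):
    """Peak theoretical bandwidth in GB/s: bytes per transfer * MT/s."""
    return bus_width_bits / 8 * mega_transfers_per_s / 1000

# Typical dual-channel DDR5 desktop: 128-bit bus at 5600 MT/s
desktop = bandwidth_gbs(128, 5600)   # ~89.6 GB/s

# A wide 256-bit LPDDR5X bus at 8000 MT/s (assumed numbers)
wide = bandwidth_gbs(256, 8000)      # ~256 GB/s

# The point above: twice the width at half the clock is the same bandwidth
assert bandwidth_gbs(512, 4000) == bandwidth_gbs(256, 8000)

print(f"desktop: {desktop:.1f} GB/s, wide bus: {wide:.1f} GB/s")
```

With more (slower) channels you could hang more DIMM-style capacity off the chip at the same total bandwidth, which is the point being made here, at the cost of die area and board routing.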

This isn't RAM in the conventional sense; it's unified memory, shared between the GPU and CPU. Soldered VRAM on GPUs has been the norm for probably 20 years because of the latency and signal reliability required, so why is this any different?

The only way to achieve what you're after is to do one of:

- Give up on unified memory and switch to a traditional platform (which there are thousands of alternatives for)

- Cripple the GPU for games and some productivity software by raising latency beyond the norm.

- Change to a server-class chip for 5x the price.

This is an amazing chip giving server-class specs in a cheap mobile platform, filling a special niche in the market for both productivity and local AI at a very competitive price. What you're arguing for makes no sense.

I don't think that would fit in a laptop, which was the original market for this chip.