Why not make a platform with a greater number of channels?

Sure, you could. The design process would go something like this:

- We need a bigger memory controller.
- To get more traces to the memory controller, we need more pins on the CPU.
- Now we need a bigger CPU package to accommodate the pins.
- Now we need a motherboard with more traces, which requires more layers, which requires a more expensive motherboard.
- We need a bigger motherboard to accommodate the 6 or 8 DIMM sockets.
- The additional traces, longer traces, and extra motherboard layers make the signalling harder, likely needing ECC or even registered ECC.

We need a more expensive CPU, a more expensive motherboard, more power, more cooling, and a larger system. Congratulations, you've reinvented Threadripper (4-channel), Siena (6-channel), Threadripper Pro (8-channel), or EPYC (12-channel). All of them are larger, more expensive, draw more than 2x the power, and are likely to live in a $5k-$15k workstation/server, not a $2k Framework Desktop about the size of a liter of milk.
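To put numbers on it: peak bandwidth scales linearly with channel count, which is both the entire appeal and the entire cost of those platforms. A minimal sketch, assuming 64-bit DDR5 channels at an illustrative 6000 MT/s (not the actual validated speeds of any of these parts):

```python
# Rough peak bandwidth by channel count, assuming 64-bit DDR5 channels
# at an illustrative 6000 MT/s; real parts support different speeds.
RATE_MT_S = 6000
CHANNEL_BITS = 64

for name, channels in [("desktop Ryzen", 2), ("Threadripper", 4),
                       ("Siena", 6), ("Threadripper Pro", 8), ("EPYC", 12)]:
    gb_s = channels * CHANNEL_BITS / 8 * RATE_MT_S / 1000
    print(f"{name:16s} {channels:2d} channels ≈ {gb_s:3.0f} GB/s")
```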

> We need a more expensive CPU, a more expensive motherboard, more power, more cooling, and a larger system. Congratulations, you've reinvented Threadripper (4-channel), Siena (6-channel), Threadripper Pro (8-channel), or EPYC (12-channel).

This is the real story, not the conspiracy-tinged market-segmentation one. That theory is silly because, at the levels where high-end consumer/enthusiast Ryzen (say, the 9950X3D) and lowest-end Threadripper/EPYC (most likely a previous-gen chip) genuinely overlap in performance, the former will generally cost you more!

Well, sort of. Apple makes a competitive Mac mini and MacBook Air with a 128-bit memory interface, decent design, solid build, nice materials, etc., starting at $1k. PC laptops can match nearly any single aspect, but rarely match the quality of the build, keyboard, trackpad, display, aluminum chassis, etc.

However, Apple will let you upgrade to the Pro (double the bandwidth), Max (4x the bandwidth), and Ultra (8x the bandwidth). The M4 Max is still efficient and gives decent battery life in a thin, light laptop. Even the Ultra runs pretty quiet and cool in a tiny Mac Studio MUCH smaller than any Threadripper Pro build I've seen.
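For a sense of scale, those multipliers translate roughly as follows. A sketch only: it assumes a single LPDDR5X-8533 transfer rate for every tier (Apple's actual parts vary in memory speed) and simply applies the 1x/2x/4x/8x bus widths from above:

```python
# Peak bandwidth implied by the 1x/2x/4x/8x bus-width tiers mentioned
# above, assuming one LPDDR5X-8533 transfer rate across the board.
RATE_MT_S = 8533

for name, bus_bits in [("base", 128), ("Pro", 256), ("Max", 512), ("Ultra", 1024)]:
    gb_s = bus_bits / 8 * RATE_MT_S / 1000
    print(f"{name:5s} {bus_bits:4d}-bit ≈ {gb_s:4.0f} GB/s")
```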

It does mystify me that x86 has a hard time matching even an M4 Pro Mac mini on bandwidth, let alone the models with 2x or 4x its memory bandwidth.

> It does mystify me that x86 has a hard time matching even an M4 Pro Mac mini on bandwidth, let alone the models with 2x or 4x its memory bandwidth.

The market dynamics are pretty clear. Having that much memory bandwidth only makes sense if you're going to provide an integrated GPU that can use it; CPU-based laptop/desktop workloads that are that bandwidth-hungry are too rare. The PC market has long relied on discrete GPUs for any high-performance GPU configuration, and the GPU market leader is the one that doesn't make x86 CPUs.

Intel's consumer CPU product line is a confusing mess, but at the silicon level it comes down to one or two designs for laptops (a low-power and a mid-power design) that are both adequately served by a 128-bit memory bus, and one or two desktop designs with only a token iGPU. The rest of the complexity comes from binning on clock speeds and core counts, and sometimes putting the desktop CPU in a BGA package for high-power laptops.

For Intel to make a part following the Strix Halo and Apple strategy, it would need to add a third major category of consumer CPU silicon, using far more than twice the total die size of any of its existing consumer CPUs, to go after a niche that's pretty small and very hard for Intel to break into given the poor quality of their current GPU IP. Intel doesn't have the cash to burn pursuing something like this.

It's a bit surprising AMD actually went for it, but they were in a better position than Intel to make a part like Strix Halo, from both a CPU and a GPU IP perspective. Even so, they ended up not including their latest GPU architecture, and only went for a 256-bit bus rather than 512-bit.
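In raw terms, that bus-width choice is the whole difference. A back-of-the-envelope sketch, assuming the commonly cited LPDDR5X-8000 for Strix Halo (not a spec sheet):

```python
# What a 256-bit vs. a hypothetical 512-bit bus means at LPDDR5X-8000.
RATE_MT_S = 8000
for bus_bits in (256, 512):
    print(f"{bus_bits}-bit ≈ {bus_bits / 8 * RATE_MT_S / 1000:.0f} GB/s")
# -> 256 GB/s (the shipping part) vs. 512 GB/s (the hypothetical one)
```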

Yes, but that platform has in-package memory, which is a higher degree of integration than even "soldered" memory. That's the kind of platform Strix Halo is most comparable to.

(I suppose you could devise a platform supporting a mix of "fast" in-package and "slow" DIMM-socketed memory, which could become interesting for all sorts of high-end RAM-hungry workloads, not just AI. No idea how that would impact the overall tradeoffs, though; it might just be infeasible.

...Also, if persistent memory (phase-change or MRAM) can solve the well-known endurance issues with flash, maybe that ultimately becomes the preferred substrate for "slow" bulk RAM? Not sure about that either.)

Risk cannibalising sales from their other products?

For example, Nvidia seeks to ban consumer GPU use in datacentres because they want to sell datacentre GPUs.

If they made consumer platforms that could take 1TB of RAM, etc., then people might choose not to buy EPYC.

After all, many cloud providers already offer Ryzen VPSes.

My thoughts exactly.

That's a question for AMD and TSMC. They only have so much space on the silicon. More memory channels means less of something else. This is not a "Framework platform" issue; it's the specification of that CPU.

Well, they chose to use this hardware platform. It all sounds like market segmentation to me, now that AMD is on top.

To be clear, AMD is giving you 2x the bandwidth of competing chips and you're complaining that it isn't 4x.

My complaints are about the maximum RAM of the system and the lack of modularity of the RAM.

With an increased number of channels, you could have a greater amount of RAM running at a lower frequency but with the same total bandwidth. So you would at least be able to run some of these much larger AI models.
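A quick sanity check of that tradeoff (illustrative numbers; the 64-bit channel width and the transfer rates are assumptions, not Strix Halo's exact memory organization):

```python
def bandwidth_gb_s(channels: int, bus_bits: int, mt_per_s: int) -> float:
    """Peak bandwidth: channels x bytes-per-transfer x transfers-per-second."""
    return channels * bus_bits / 8 * mt_per_s / 1000

# Today's layout, treated as 4 x 64-bit channels at 8000 MT/s:
print(bandwidth_gb_s(4, 64, 8000))  # 256.0 GB/s
# Twice the channels at half the rate: same total bandwidth, but twice
# the sockets/packages, hence a much higher maximum RAM capacity.
print(bandwidth_gb_s(8, 64, 4000))  # 256.0 GB/s
```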

This isn't RAM; this is unified memory, shared between the GPU and the CPU. Soldered VRAM for GPUs has been the norm for probably 20 years because of the latency and reliability required, so why is this any different?

The only way to achieve what you're after is to do one of the following:

- Give up on unified memory and switch to a traditional platform (for which there are thousands of alternatives).

- Cripple the GPU for games and some productivity software by raising latency beyond the norm.

- Change to a server-class chip for 5x the price.

This is an amazing chip, giving server-class specs in a cheap mobile platform and filling a special niche in the market for both productivity and local AI at a very competitive price. What you're arguing for makes no sense.

I don't think that would fit in a laptop, which was the original market for this chip.