Well, sort of. Apple makes a competitive Mac mini and MacBook Air with a 128-bit memory interface, decent design, solid build, nice materials, etc., starting at $1k. PC laptops can match nearly any individual spec, but rarely match the quality of the build, keyboard, trackpad, display, aluminum chassis, etc.

However, Apple will let you upgrade to the Pro (double the bandwidth), Max (4x the bandwidth), and Ultra (8x the bandwidth). The M4 Max is still efficient and gives decent battery life in a thin, light laptop. Even the Ultra stays pretty quiet and cool in a tiny Mac Studio that's MUCH smaller than any Threadripper Pro build I've seen.
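Back-of-envelope numbers, taking the bus-width multiples above and assuming LPDDR5X at 8533 MT/s across the board (an assumption for illustration; actual transfer rates vary by chip and generation):

    # Rough peak bandwidth: (bus width in bits / 8) bytes per transfer * transfer rate.
    # The 8533 MT/s LPDDR5X rate is assumed for illustration; real parts differ.
    def peak_gb_per_s(bus_bits: int, mt_per_s: int = 8533) -> float:
        return bus_bits / 8 * mt_per_s / 1000

    for name, bits in [("base", 128), ("Pro", 256), ("Max", 512), ("Ultra", 1024)]:
        print(f"{name:>5} ({bits}-bit): ~{peak_gb_per_s(bits):.0f} GB/s")

That lands at roughly 137, 273, 546, and 1092 GB/s, which is in the right ballpark for Apple's published figures.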

It does mystify me that x86 has a hard time matching even a Mac mini with the Pro chip on bandwidth, let alone the models with 2x or 4x the memory bandwidth.

> It does mystify me that x86 has a hard time matching even a Mac mini with the Pro chip on bandwidth, let alone the models with 2x or 4x the memory bandwidth.

The market dynamics are pretty clear. Having that much memory bandwidth only makes sense if you're going to provide an integrated GPU that can use it; CPU-based laptop/desktop workloads that are that bandwidth-hungry are too rare. The PC market has long relied on discrete GPUs for any high-performance GPU configuration, and the GPU market leader is the one that doesn't make x86 CPUs.

Intel's consumer CPU product line is a confusing mess, but at the silicon level it comes down to one or two designs for laptops (a low-power and a mid-power design) that are both adequately served by a 128-bit memory bus, and one or two desktop designs with only a token iGPU. The rest of the complexity comes from binning on clock speeds and core counts, and sometimes putting the desktop CPU in a BGA package for high-power laptops.

To make a part following the Strix Halo and Apple strategy, Intel would need to add a third major category of consumer CPU silicon, using far more than twice the total die size of any of their existing consumer CPUs, to go after a niche that's pretty small and very hard to break into given the poor quality of their current GPU IP. Intel doesn't have the cash to burn pursuing something like this.

It's a bit surprising AMD actually went for it, but they were in a better position than Intel to make a part like Strix Halo from both a CPU and a GPU IP perspective. Even so, they ended up not including their latest GPU architecture, and only went for a 256-bit bus rather than 512-bit.
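For scale, the same rough math on Strix Halo's bus, assuming the commonly quoted LPDDR5X-8000 (treat the rate as an assumption):

    # Strix Halo's 256-bit bus vs. a hypothetical 512-bit version, at an assumed
    # LPDDR5X-8000 transfer rate.
    for bits in (256, 512):
        print(f"{bits}-bit: ~{bits / 8 * 8000 / 1000:.0f} GB/s")  # ~256 and ~512 GB/s

So roughly Apple "Pro"-tier bandwidth as shipped, and it would have been in "Max" territory with the wider bus.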

Yes, but that platform has in-package memory, which is a higher degree of integration than even "soldered". That's the kind of platform Strix Halo is most comparable to.

(I suppose you could devise a platform that supports mixing both "fast" in-package and "slow" DIMM-socketed memory, which could become interesting for all sorts of high-end RAM-hungry workloads, not just AI. No idea how that would impact the overall tradeoffs, though; it might just be infeasible.

...Also, if persistent memory (phase-change or MRAM) can avoid the well-known endurance issues of flash, maybe it ultimately becomes the preferred substrate for "slow" bulk RAM? Not sure about that either.)