This doesn't get talked about enough - the GPU is weak, weak, weak. And anyone who could fix it will go to a serious AI company (for 2-3x the salary).

The GPU is monstrously good. Depending on the workload, the M1 series GPU using 120W could beat an RTX 3090 using 420W.

Same with the CPU. The Linux kernel compiled faster on an M1 than on the fastest Intel i9 at the time, again using only 25% of the power budget.

And the M-series has only gotten better.

It is kind of sad Apple neglects helping developers optimize games for the M-series, because iDevices and MacBooks could be the definitive mobile gaming devices.

>the M1 series GPU using 120W could beat an RTX 3090 using 420W

You're cooked if you actually believe this

I very recently ran the numbers on these GPUs for an upcoming blog post. The token generation performance is bad, but the prefill performance is _really_ bad.

For a Qwen 3 30B / 3B MoE, 4-bit quant:

- parsing a 4k prompt on an M4 MacBook Air takes 17 seconds before generating a single token.

- on an M4 Max Mac Studio, it's faster at 2.3 seconds.

- on an RTX 5090, it's 142ms.

An RTX 5090 uses more power than an M4 Max Mac Studio, but not 16x more.
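
To make the distinction concrete, here's a minimal sketch of how the two phases can be timed separately with llama-cpp-python (the GGUF filename and filler prompt are placeholders, not the exact setup above):

    # Minimal sketch: time prefill (time to first token) separately from
    # decode throughput with llama-cpp-python. Model path and prompt are placeholders.
    import time
    from llama_cpp import Llama

    llm = Llama(
        model_path="qwen3-30b-a3b-q4_k_m.gguf",  # hypothetical 4-bit MoE quant
        n_ctx=8192,
        n_gpu_layers=-1,   # offload all layers (Metal on Apple Silicon, CUDA on a 5090)
        verbose=False,
    )

    prompt = " hello" * 4000  # roughly a 4k-token filler prompt

    start = time.perf_counter()
    first_token_at = None
    chunks = 0
    for _ in llm(prompt, max_tokens=128, stream=True):
        if first_token_at is None:
            first_token_at = time.perf_counter()  # prefill ends when the first chunk arrives
        chunks += 1
    end = time.perf_counter()

    print(f"prefill / time to first token: {first_token_at - start:.2f} s")
    print(f"decode: {chunks / (end - first_token_at):.1f} tok/s")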

Somehow, Apple has always been able to sell their stuff as magic. Remember the megahertz myth? Apple hertzes and apple bytes are much better than PC hertzes and bytes because they are made by virgin elves during a full moon.

> Apple hertzes and apple bytes are much better than PC hertzes and bytes because they are made by virgin elves during a full moon.

The thing that Apple has always been excellent at is efficiency - even during the Intel era, MacBooks outclassed their Windows peers. Same CPU, same RAM, same disks, so it definitely wasn't the hardware; it was the software that allowed Apple to pull much more real-world performance out of the same clock cycles and power usage.

Windows itself, but especially third-party drivers, is disastrous when it comes to code quality, and those drivers are much, much more generic (and thus inefficient) compared to Apple's, given its very small number of SKUs. Apple insisted on writing all drivers, and IIRC even most of the firmware for embedded modules, themselves to achieve that tight control... which was (in addition to the 2010-ish lead-free Soldergate) why they fired NVIDIA from making GPUs for Apple - NV didn't want to give Apple the specs any more to write drivers.

> NV didn't want to give Apple the specs any more to write drivers.

I think that's a valid demand, considering Nvidia's budding commitment to CUDA and other GPGPU paradigms. Apple, backing OpenCL, would have had every reason to break Nvidia's code and ship half-baked drivers. They did it with AMD's GPUs later down the line, pretending Vulkan couldn't be implemented so they could promote Metal.

Apple wouldn't have made GeForce more efficient with their own firmware; they would have hung a Sword of Damocles over Nvidia's head.

On Geekbench 5, the M1 hits 483 FPS and the RTX 3090 hits 504 FPS.

There are other workloads where the M1 actually beats the 3090.

Apple does plenty of hyping but it's always cute when irrational haters like you put them down. The M1 was (well, is) a marvel and absolutely smokes a 3090 in perf per watt.

What Geekbench 5 FPS are you talking about? Geekbench only has OpenCL and Vulkan scores for the 3090 as far as I can tell, and the M1 Ultra gets less than half the OpenCL score of the 3090. And the M1 Ultra was significantly more expensive.

Find or link these workloads you think exist, please

> The M1 was (well, is) a marvel and absolutely smokes a 3090 in perf per watt.

The GTX 1660 also smokes the 3090 in perf per watt. Being more efficient while being dramatically slower is not exactly an achievement; in fact, it's pretty typical power-consumption scaling. Perf per watt is only meaningful if you're also able to match the perf itself. That's what actually made the M1 CPU notable. M-series GPUs (not just the M1, but even the latest) haven't managed to match or even come close to the perf, so being more efficient is not really any different than, say, Nvidia, AMD, or Intel mobile GPU offerings. Nice for laptops, insignificant otherwise.
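
To put rough numbers on that point, a tiny illustrative calculation (the throughput and power figures are made up for the example, not measurements of any of these cards):

    # Illustrative only: made-up throughput and power figures to show why
    # perf-per-watt without matching absolute perf is unremarkable.
    cards = {
        "big GPU":   {"throughput": 100.0, "power": 350.0},  # units/s, watts
        "small GPU": {"throughput":  30.0, "power":  75.0},
    }

    for name, c in cards.items():
        print(f"{name}: {c['throughput']:.0f} units/s, "
              f"{c['throughput'] / c['power']:.2f} units/s per watt")

    # The small GPU "wins" perf/W (0.40 vs 0.29) while delivering less than
    # a third of the absolute performance.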

Apples and limes.

The context of this thread isn't consumer chips, but Apple's analog to an H/B200.

Well, Apple is in the consumer computing business.

* Powered by in-house models they've tried to train and in-house M-series inference servers

TFA is literally about a B2B deal, not consumer compute.

[deleted]

The GPUs are bottom-barrel for compute-focused industries. They're mobile-grade hardware that arguably can't even scale to prior Mac Pro workloads.

> The GPU is monstrously good. Depending on the workload, the M1 series GPU using 120W could beat an RTX 3090 using 420W.

You're just listing the max TDP of both chips. If you limited a 3090 to 120W, it would still run laps around an M1 Max in several workloads, despite being an 8nm GPU versus a 5nm one.
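
For what it's worth, capping the power limit is scriptable - a rough sketch using the NVML Python bindings (pynvml), assuming a single NVIDIA GPU and admin rights; 120W is just the figure from the comparison above:

    # Rough sketch: cap an NVIDIA card's board power via NVML (needs root/admin).
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

    lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)  # milliwatts
    target = max(lo, min(hi, 120_000))             # clamp 120 W into the allowed range
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, target)

    print(f"limit set to {pynvml.nvmlDeviceGetPowerManagementLimit(handle) / 1000:.0f} W")
    pynvml.nvmlShutdown()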

> It is kind of sad Apple neglects helping developers optimize games for the M-series

Apple directly advocated for ports like Death Stranding, Cyberpunk 2077, and Resident Evil internally. Advocacy and optimization are not the issue; Apple's obsession over reinventing the wheel with Metal is what puts the Steam Deck ahead.

Edit (response to matthewmacleod):

> Bold of them to reinvent something that hadn't been invented yet.

Vulkan was not the first open graphics API, as most Mac developers will happily inform you.

> Vulkan was not the first open graphics API, as most Mac developers will happily inform you.

OpenGL had become too unmanageable, which is why devs moved to DirectX.

Unless you meant a different one?

> The GPUs are bottom-barrel for compute-focused industries. They're mobile-grade hardware that arguably can't even scale to prior Mac Pro workloads.

Surprised Apple didn't create a TPU-like architecture. Another misstep from John Giannandrea.

I'm confused how anyone ever thought the NPU would be a good idea. The GPU is almost always underutilized on Mac and could do the brunt of the work for inference if it embraced GPGPU principles from the start. Creating a dedicated hardware block to alleviate a theoretical congestion issue is... bewildering. That goes for most NPUs I've seen.

Apple had the technology to scale down a GPGPU-focused architecture just like Nvidia did. They had the money to take that risk, and had the chip design chops to take a serious stab at it. On paper, they could have even extended it to iPhone-level edge silicon similar to what Nvidia did with the Jetson and Tegra SOCs.

I think they built the NPU with whatever models they needed to run on the iPhone in mind, versus trying to build a general-purpose chip, and then got lucky that it was also useful for LLMs.

(Like “I want to do object detection for cutting people into stickers on device without blowing a hole in the battery, make me a chip for that”.)

I'm not sure even Apple thought that, given that they don't officially provide access to ANE internals under macOS (barring unsupported hacks). But if that was fixed, it could then be useful for improving the power efficiency of prefill, where the CPU/GPU hardware is quite weak (especially prior to the M5 Neural Accelerators).
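
For reference, the only sanctioned route today is asking Core ML to prefer the Neural Engine and letting the runtime decide - a minimal coremltools sketch, macOS only, with "model.mlpackage" standing in for any converted model:

    # Minimal sketch: request (not guarantee) ANE execution via Core ML compute units.
    # There is no public API to program the ANE directly; unsupported ops fall back
    # to CPU/GPU at the runtime's discretion.
    import coremltools as ct

    model = ct.models.MLModel(
        "model.mlpackage",                        # placeholder for a converted model
        compute_units=ct.ComputeUnit.CPU_AND_NE,  # prefer CPU + Neural Engine
    )
    # prediction = model.predict({...})           # input dict depends on the model's schema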

> Apple's obsession over reinventing the wheel with Metal

Bold of them to reinvent something that hadn't been invented yet.

[deleted]