On Cinebench 2025 single threaded, M4 is roughly 4x more efficient and 50% faster than Strix Halo. These numbers can be verified by googling Notebookcheck.
How many iterations to match Apple?
Yes and no. I have a MacBook Pro M4 and a ZBook G1a (AI Max 395+, i.e. Strix Halo).
In day-to-day usage the Strix Halo is significantly faster, especially for large-context LLMs and games, but also for typical stuff like Lightroom (GPU heavy) etc.
On the flip side, the M4's battery life is significantly longer (but the MBP is also roughly 1/4 heavier).
For what it's worth, I also have a T14 with a Snapdragon X Elite, and while its battery life is closer to a MBP's, it's just kinda slow and clunky.
So my best machine right now is actually the x86!
Here's a comparison I did between Strix Halo, M4 Pro, M4 Max: https://imgur.com/a/yvpEpKF
As you can see, Strix Halo is behind M4 Pro in performance and severely behind in efficiency. In ST, M4 Pro is 3.6x more efficient and 50% faster. It's not even close to the M4 Max.
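For context on how those ratios are computed, here is a minimal sketch of the arithmetic; the score and wattage values are placeholders chosen only to reproduce the quoted ~1.5x speed and ~3.6x efficiency ratios, not measured data:

```python
# Perf-per-watt arithmetic behind "3.6x more efficient and 50% faster".
# Substitute the measured Cinebench ST score and package power for each
# chip; the numbers below are illustrative placeholders.

def points_per_watt(score: float, watts: float) -> float:
    """Benchmark efficiency = points divided by package power."""
    return score / watts

m4_pro_score, m4_pro_watts = 178.0, 9.0   # placeholder values
strix_score, strix_watts = 118.0, 22.0    # placeholder values

speed_ratio = m4_pro_score / strix_score
eff_ratio = (points_per_watt(m4_pro_score, m4_pro_watts)
             / points_per_watt(strix_score, strix_watts))

print(f"ST speed ratio:   {speed_ratio:.2f}x")   # ~1.51x
print(f"Efficiency ratio: {eff_ratio:.2f}x")     # ~3.69x
```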
Because it uses a metal enclosure.
Someone has these two machines, and claims the x86 feels faster in his work.
You don't own any of the machines but have "made" a comparison by copying data from the internet I assume.
This is like explaining to someone who eats a sweet apple that the internet says the apple isn't sweet.
MacBook Pro, 2TB, 32gb, 3200 EUR
HP G1a, 2TB, 128gb, 3700 EUR
If we don't compare laptops but mini-PCs,
Evo X2, 2TB, 128gb, 2000 EUR,
Mac Mini, 2TB, 32gb, 2200 EUR
Their point is that they're comparing SoCs that aren't in the same class, not that it's not fast.
They’re not arguing against their subjective experience using it, they’re arguing against the comparison point as an objective metric.
If you’re picking analogies, it’s like saying Audis are faster than Mercedes but comparing an R8 against an A class.
1. Everyone is different. I don't care if a computer is worse on paper if it's better in real-world use.
2. I'd say apples and oranges is subjective and depends on what is important to you. If you're interested in vitamin C, apples to oranges is a valid comparison. My interest in comparing these is running local coding LLMs, and it is difficult to get great results on 24/32 GB of Nvidia VRAM (but by far the fastest option/$ if your model fits into a 5090). For the models I want to work with you often need 128 GB of RAM (rough sizing sketch after this list), therefore I'd compare a Mac Studio 128 GB (the cheapest 128 GB option from Apple) with a 395+ (the cheapest, perhaps only, option for x86/Linux). So what is apples to oranges to you makes sense to many other people.
3. Why would you think a 395+ and an M4 Pro are in "a different class"?
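A rough sketch of the sizing arithmetic behind point 2; the parameter counts and quantization levels below are illustrative assumptions, not a claim about any particular model:

```python
# Back-of-the-envelope memory needed just for LLM weights:
# bytes ≈ parameter_count × bits_per_weight / 8. KV cache and OS overhead
# come on top, which is why 24/32 GB of VRAM runs out quickly.

def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    # 1e9 params × (bits/8) bytes per param / 1e9 bytes per GB
    return params_billion * (bits_per_weight / 8)

for params in (8, 32, 70, 120):  # illustrative model sizes in billions
    print(f"{params:>4}B params: ~{weight_gb(params, 4):5.1f} GB at 4-bit, "
          f"~{weight_gb(params, 8):5.1f} GB at 8-bit (weights only)")
```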
Let me start with your last point because it’s where you’ve misread the original comment and why none of your following arguments seem to make sense to onlookers.
They have a MacBook Pro with an M4, not an M4 Pro. That is a wildly different class of SoC from the 395. Unless the 395 is also capable of running in fanless devices without issue.
For your first point, yes it does matter if the discussion is about objectively trying to understand why things are faster or not. Subjective opinions are fine, but they belong elsewhere. My grandma finds her Intel Celeron fast enough for her work; I'm not getting into an argument with her over whether an i9 is faster, for the same reason.
Your second point is equally subjective, and out of place in a discussion about objectively trying to understand what accounts for the performance difference.
I didn't "max out" the SSD, I chose an SSD to match the machine of the user.
You: "You're comparing the base M4 to a full fat Strix Halo that costs nearly $4,000."
Then
You: "A little disingenuous to max out on the SSD to make the Apple product look worse."
That's why your post was disingenuous.
If it helps you focus on the actual discussion: we are comparing maximum CPU and GPU speed for the dollar. That's it.
Evo X2, 128gb, 2000 EUR
Mac Studio, 128gb, 4400 EUR
Great. Here's what you're getting between an M4 Max vs an AMD AI 395+: https://imgur.com/a/yvpEpKF
And of course, the Mac Studio itself is a much more capable box with things like Thunderbolt 5, more ports, quieter operation, etc.
I can see why some people would choose the AMD solution. It runs x86, works well with Linux, can play DirectX games natively, and is much cheaper.
Meanwhile, the M4 Max performs significantly better, is more efficient, is likely much quieter, runs macOS, has more ports, better build quality, and Apple's backing and support.
AMD 395+ / CachyOS, Geekbench 6 MT: 25334. Not sure where you get your Geekbench number for the 395+ from.
You: "If it helps you focus on what the actual discussion, we are comparing maximum CPU and GPU speeds for the dollar."
You: "Mac Studio itself is a much more capable box with things like Thunderbolt5, more ports, quieter"
Source: https://browser.geekbench.com/v6/cpu/13318696
You: AMD 395+ / CachyOS, Geekbench 6 MT: 25334
Me: https://imgur.com/a/yvpEpKF
I also have a Strix Halo zbook G1A and I am quite disappointed in the idle power consumption as it hovers around 8W.
Adding to that, it is very picky about which power brick it accepts (not every 140W PD compliant works) and the one that comes with the laptop is bulky and heavy. I am used to plugging my laptop into whatever USB-C PD adapter is around, down to 20W phone chargers. Having the zbook refuse to charge on them is a big downgrade for me.
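If anyone wants to reproduce the idle measurement, here is a minimal sketch using Linux sysfs; it assumes the battery shows up as BAT0, that power_now reports microwatts, and that you are running on battery (some firmwares only expose current_now/voltage_now instead):

```python
# Average battery discharge rate over ~10 s, read from Linux sysfs.
# Assumption: battery is BAT0 and power_now is in microwatts; adjust if
# your firmware only provides current_now (µA) and voltage_now (µV).
from pathlib import Path
import time

POWER_NOW = Path("/sys/class/power_supply/BAT0/power_now")

samples = []
for _ in range(10):
    samples.append(int(POWER_NOW.read_text()) / 1_000_000)  # µW -> W
    time.sleep(1)

print(f"average draw: {sum(samples) / len(samples):.1f} W")
```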
> Adding to that, it is very picky about which power brick it accepts (not every 140W PD compliant works) and the one that comes with the laptop is bulky and heavy. I am used to plugging my laptop into whatever USB-C PD adapter is around, down to 20W phone chargers. Having the zbook refuse to charge on them is a big downgrade for me.
It's Dell; they are probably not actually using PD 3.1 to hit the 140 W mark, but rather a PD 3.0 extension that shoves 20 V 7 A into the laptop. I can't find any info, but you can check on the charger.
If it lists 28 V then it's 3.1, otherwise 3.0. If it's 3.1 you can get a Baseus PowerMega 140W PD 3.1; it seems like a really solid charger from my limited use.
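For what it's worth, a sketch of the arithmetic behind the 28 V check; the 20 V 7 A case is the non-standard vendor-extension guess from the comment above, not a PD spec level:

```python
# Standard USB PD 3.0 (SPR) tops out at 20 V x 5 A = 100 W, so a 140 W
# brick is either PD 3.1 EPR (28 V x 5 A) or a proprietary extension
# pushing extra current at 20 V.
profiles = {
    "PD 3.0 SPR maximum (20 V, 5 A)":     (20, 5),
    "PD 3.1 EPR 140 W (28 V, 5 A)":       (28, 5),
    "vendor-extension guess (20 V, 7 A)": (20, 7),  # assumption, not a PD spec level
}
for name, (volts, amps) in profiles.items():
    print(f"{name}: {volts * amps} W")
```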
It is HP, and the provided adapter outputs 28 V 5 A, so it's in spec.
With some of the other 28 V 5 A adapters I have, it charges until a compute-heavy task kicks in and then stops. I have seen reports online of people seeing this behavior with the official adapter. My theory is that the laptop itself does not accept any ripple at all.
Ah, my bad. Are you sure your cable can do 140 W? That was the source of most of my pain trying to push 100 W to my work laptop. Baseus and Anker have some good PD 3.1 chipped cables that worked for me. What kind of charger are you using?
I am also searching for a good portable brick to replace the 140 W one. I found the 100 W Anker Prime works well. Surprisingly, there is an almost identical 3-port Baseus 100 W GaN at half the price. For some reason it is hard to come by (they have a few other 100 W bricks that are not as portable); I think it might be discontinued.
The important part of this is 'single threaded'. If you are actually using Cinebench to do real rendering, you would always want multi-core performance, which pretty much makes Apple's single-core benchmark results pointless.
> How many iterations to match Apple?
Why are you asking me? I'm not in charge of AMD.
Yes the Strix Halo is not as fast on the benchmarks as the M4 Max, its bandwidth is lower, and the max config has less memory. However, it is available in a lot of different configurations and some are much cheaper than comparable M4 systems (e.g. the maxed out Framework desktop is $2000.) It's a tradeoff, as everything in life is. No need to act like such an Apple fanboi.
On one of the few workloads where massive parallelism makes sense, why quote a single threaded number? I'm curious.
To show in real numbers why people always say a MacBook feels miles ahead of AMD and Intel in actual real-world experience.
The primary reason is the ST speed (snappy feeling) and the efficiency (no noise, cool, long battery life).
It just so happens that Cinebench 2025 is the only power measurement metric I have available via Notebookcheck. If Notebookcheck did power measurements for GB6, I'd rather use that as it's a better CPU benchmark overall.
Cinebench 2025 is a decent benchmark but not perfect. It does a good enough job of demonstrating why the experience of using Apple Silicon is so much better. If we truly want to measure the CPU architecture like a professional, we would use SPEC and measure power from the wall.
>How many iterations to match Apple?
Until AMD can build a tailor-made OS for their chips and build their own laptops.
Here's an M4 Max running macOS running Parallels running Windows compared to AMD's very best laptop chip:
https://browser.geekbench.com/v6/cpu/compare/13494385?baseli...
M4 Max is still faster. Note that the M4 Max is only given 14 out of 16 cores, likely reserving 2 of them for macOS.
How do you explain this when Windows has zero Apple Silicon optimizations?
Maybe Geekbench is not a good benchmark?
Maybe it is? Cinebench favors Apple even more.
GB correlates highly with SPEC. AMD also uses GB in their official marketing slides.
Geekbench is the closest thing to a good benchmark that's usable across generations and architectures.