I may be out of date or wrong, but I recall that when the M1 came out there were claims that x86 could never catch up because of an instruction-decoding bottleneck (x86 instructions are variable size), which the M1 either doesn't have or can work around by decoding in parallel. Because of that bottleneck, x86 needs other tricks to get speed, and those run hot.

ARM instructions are fixed size, while x86 instructions are variable length. This makes a wide decoder fairly trivial for ARM, while it is complex and difficult for x86.
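To make the decoder-width point concrete, here's a toy sketch (invented encodings, not any real ISA): with fixed-width instructions, every decoder lane knows its start offset up front, while with variable-length instructions each start depends on all previous lengths, so a naive decoder must walk the bytes serially.

```python
# Toy illustration: why fixed-width decode parallelizes trivially while
# variable-length decode serializes. Encodings here are made up.

FIXED_WIDTH = 4  # ARM-style: every instruction is 4 bytes

def fixed_decode_starts(code: bytes) -> list[int]:
    # Lane i of a wide decoder simply looks at byte i * 4.
    # No dependency between lanes -> all can decode at once.
    return [i * FIXED_WIDTH for i in range(len(code) // FIXED_WIDTH)]

def toy_variable_length(first_byte: int) -> int:
    # Hypothetical scheme: low 2 bits of the first byte give the length
    # (1-4 bytes). Real x86 is far messier (prefixes, ModRM, SIB...),
    # which is exactly the problem.
    return (first_byte & 0b11) + 1

def variable_decode_starts(code: bytes) -> list[int]:
    # Instruction N's start depends on the lengths of ALL previous
    # instructions, so this loop carries a serial dependency.
    starts, pos = [], 0
    while pos < len(code):
        starts.append(pos)
        pos += toy_variable_length(code[pos])
    return starts

print(fixed_decode_starts(bytes(16)))                            # [0, 4, 8, 12]
print(variable_decode_starts(bytes([0b11, 0, 0, 0, 0b01, 0, 0b00])))  # [0, 4, 6]
```

Real x86 decoders speculate on instruction boundaries to get around this, which costs transistors and power.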

However, this doesn't really hold up as the cause for the difference. The Zen 4/5 chips, for example, source the vast majority of their instructions from their uOp trace cache, where the instructions have already been decoded. This also saves power - even on ARM, decoders take power.
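The idea behind a uop cache can be sketched in a few lines (this is my simplification, not any vendor's design): decode each instruction once, key the decoded uops by fetch address, and serve later visits (e.g. loop iterations) straight from the cache with the decoders idle.

```python
# Minimal sketch of the uop-cache idea, with invented "instructions":
# pay the decode cost once, then replay decoded uops from the cache.

decode_count = 0

def expensive_decode(raw: str) -> list[str]:
    global decode_count
    decode_count += 1
    # Stand-in for decode: one instruction may crack into several uops.
    return [f"uop({part})" for part in raw.split("+")]

uop_cache: dict[int, list[str]] = {}

def fetch_uops(addr: int, raw: str) -> list[str]:
    if addr not in uop_cache:            # miss: run the decoder once
        uop_cache[addr] = expensive_decode(raw)
    return uop_cache[addr]               # hit: decoder stays powered down

program = {0: "load+add", 4: "store"}
for _ in range(1000):                    # a hot loop
    for addr, raw in program.items():
        fetch_uops(addr, raw)

print(decode_count)  # 2: once per instruction, not once per iteration
```

The real structures also have to handle invalidation, capacity limits, and branch-path packing, which is where the "is it free?" question below comes in.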

People have been trying to figure out the "secret sauce" since the M chips were introduced. In my opinion, it's a combination of:

1) The Apple engineers did a superb job creating a well-balanced architecture.

2) Being close to their memory subsystem, with lots of bandwidth and buffers deep enough to actually use it, is great. For example, my old M2 Pro MacBook has more than twice the memory bandwidth of the current best desktop CPU, the Zen 5 9950X. That's absurd, but here we are...

3) AMD and Intel bias heavily toward the costly side of the watts-vs-performance curve. Even the compact Zen cores are optimized more for area than for wattage. I'm curious what a true low-power Zen core (akin to the Apple E-cores) would do.
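The bandwidth gap in point 2 can be ballparked from peak-rate arithmetic. The configurations are my assumptions: an M2 Pro with a 256-bit LPDDR5-6400 interface versus a 9950X with stock dual-channel (2 x 64-bit) DDR5-5600.

```python
# Rough peak-bandwidth arithmetic behind the "more than twice" claim.
# Memory configs are assumptions: 256-bit LPDDR5-6400 (M2 Pro) vs
# dual-channel 128-bit DDR5-5600 (9950X at stock).

def peak_gb_s(bus_bits: int, mega_transfers: int) -> float:
    bytes_per_transfer = bus_bits // 8
    return mega_transfers * bytes_per_transfer / 1000  # MT/s * B -> GB/s

m2_pro = peak_gb_s(256, 6400)   # 204.8 GB/s
r9950x = peak_gb_s(128, 5600)   # 89.6 GB/s
print(m2_pro, r9950x, round(m2_pro / r9950x, 2))  # ratio is over 2x
```

These are theoretical peaks; sustained numbers are lower on both, but the ratio is what matters for the argument.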

When limited to 5 watts, the Ryzen HX 370 works pretty darn well. In some low-power use cases, my GPD Pocket 4 is more power efficient than my M3 MBA.

We are going to need to see some numbers for your claim. That’s not believable.

An 8.8" screen takes a lot less power.

When you say efficiency, I assume you’re factoring in performance of the device as well?

Maybe run Geekbench 6 and see.

I am not the original commenter, but they said "low-power use cases", i.e. very much not when running Geekbench; rather, when it is near idle.

FYI, AMD chips are notoriously bad at idle.

We will need some citations on that, as the GPD Pocket 4 isn't even the most power-efficient pocket PC.

The closest I've seen is an uncited Reddit thread about USB-C charging draw when running a task, conflating that with power usage.

How about single-core performance?

But is the uOp trace cache free? It surely doesn't magically decode and put stuff in there without cost.

For sure. For what it's worth, though, I have run across several references to ARM also implementing uop caches as a power optimization versus just running the decoders, so I'm inclined to say that whatever its cost, it pays for itself. I am not a chip designer though!

Apple never used a uop cache in their designs. ARM dropped uop caches when they removed 32-bit support. Qualcomm also skipped uop cache.

Uop caches made sense with 32-bit support because the 32-bit ISA was so complex (though still simple compared to x86). Once they moved to a simplified instruction design, the cost of decoding every single time was lower than the cost of maintaining the uop cache.

Zens don't have a trace cache, just a uop cache.

They can always catch up; it may just take a while. x86's variable-size instructions do have a performance advantage: the denser code fits in the instruction cache better.

ARM has better /security/ though: not only does it have more modern features, but x86's variable-length instructions also mean the same bytes can be reinterpreted by jumping into the middle of an instruction, something fixed-length ARM rules out.
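The mid-instruction-jump trick can be shown with a tiny decoder covering just three real x86-64 opcodes (`B8` = mov eax, imm32; `0F 05` = syscall; `C3` = ret); the decoder itself is my toy, not a real disassembler. The same five bytes yield one harmless instruction from offset 0 and a syscall-then-ret gadget from offset 1.

```python
# Why variable-length encoding helps attackers: the same bytes decode to
# different instruction streams depending on where execution starts.
# Toy decoder over three real x86-64 opcodes only.

def decode(code: bytes, pos: int) -> list[str]:
    out = []
    while pos < len(code):
        b = code[pos]
        if b == 0xB8:                              # mov eax, imm32 (5 bytes)
            imm = int.from_bytes(code[pos + 1:pos + 5], "little")
            out.append(f"mov eax, {imm:#x}")
            pos += 5
        elif b == 0x0F and code[pos + 1] == 0x05:  # syscall (2 bytes)
            out.append("syscall")
            pos += 2
        elif b == 0xC3:                            # ret (1 byte)
            out.append("ret")
            pos += 1
        else:
            raise ValueError(f"byte {b:#x} not in this toy table")
    return out

code = bytes([0xB8, 0x0F, 0x05, 0xC3, 0xC3])  # encodes: mov eax, 0xc3c3050f
print(decode(code, 0))  # ['mov eax, 0xc3c3050f']
print(decode(code, 1))  # ['syscall', 'ret', 'ret'] -- a hidden gadget
```

On ARM, 4-byte alignment means there is only one way to decode any reachable byte stream, so ROP gadget hunting can't mine these overlapping interpretations.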

No one ever said that. The M1 was not the fastest laptop chip when it was introduced. It was a nice balance of speed, battery life, and heat.