Apple tailors their software to run optimally on their hardware. Other OSs have to work on a variety of platforms, which limits the amount of hardware-specific optimization they can do.

Well I don’t think so.

First, OP is talking about Chrome, which is not Apple software. And I can testify that I observed the same behavior with other software that is really not optimized for macOS, or optimized at all. JetBrains IDEs are fast on M*.

Also, processor manufacturers are contributors to the Linux kernel and have an economic interest in making Linux run as fast as possible on their platforms if they want to sell them to datacenters.

I think it’s something else. Probably the unified memory?

Chrome uses tons of APIs from macOS, and all that code is very well optimized by Apple.

I remember disassembling Apple’s memcpy function on ARM64 and being amazed at how much customization they did just to make that little function as efficient as possible for each length of a (small) memory buffer.

memcpy (and the other string routines) are some of the library functions that benefit most from heavy optimisation and tuning for specific CPUs -- they get hit a lot, and careful adjustment of the code can get major performance wins by ensuring that the full memory bandwidth of the CPU is being used (which may involve using specific load instructions, deciding whether using the SIMD registers is better or not, and so on). So everybody who cares about performance optimises these routines pretty carefully, regardless of toolchain/OS. For instance, the glibc versions are here:

https://github.com/bminor/glibc/tree/master/sysdeps/aarch64/...

and there are five versions specialised for either specific CPU models or for available architecture features.
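
To make the length-dispatch idea concrete, here is a minimal C sketch -- my own illustration, not Apple's or glibc's actual code, and `my_memcpy` is a hypothetical name. Small sizes are handled with a couple of possibly overlapping fixed-size loads/stores chosen by size class, and everything larger falls through to wide block copies. The real routines do this in hand-written assembly, choose between SIMD and integer registers, and tune the thresholds per CPU.

    /* Sketch of length-dispatched memcpy (illustration only, not a real
     * libc implementation). The fixed-size memcpy calls are how you spell
     * unaligned loads/stores in portable C; compilers lower them to single
     * load/store instructions. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void *my_memcpy(void *restrict dst, const void *restrict src, size_t n)
    {
        uint8_t *d = dst;
        const uint8_t *s = src;

        if (n <= 16) {
            if (n >= 8) {            /* 8..16 bytes: two overlapping 8-byte copies */
                uint64_t a, b;
                memcpy(&a, s, 8);
                memcpy(&b, s + n - 8, 8);
                memcpy(d, &a, 8);
                memcpy(d + n - 8, &b, 8);
            } else if (n >= 4) {     /* 4..7 bytes: two overlapping 4-byte copies */
                uint32_t a, b;
                memcpy(&a, s, 4);
                memcpy(&b, s + n - 4, 4);
                memcpy(d, &a, 4);
                memcpy(d + n - 4, &b, 4);
            } else {                 /* 0..3 bytes: plain byte loop */
                for (size_t i = 0; i < n; i++)
                    d[i] = s[i];
            }
            return dst;
        }

        /* Larger copies: 16-byte blocks, then one final (possibly overlapping)
         * 16-byte copy for the tail instead of a byte loop. */
        size_t i = 0;
        for (; i + 16 <= n; i += 16)
            memcpy(d + i, s + i, 16);
        if (i < n)
            memcpy(d + n - 16, s + n - 16, 16);
        return dst;
    }

    int main(void)
    {
        char src[] = "hello, memcpy dispatch";
        char dst[sizeof src];
        my_memcpy(dst, src, sizeof src);
        printf("%s\n", dst);
        return 0;
    }

The overlapping-copy trick is the key design choice: it covers every small length with a fixed, branch-light instruction sequence instead of a per-byte loop, which is why production implementations specialise so aggressively on the small-size cases.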

This argument never passes the sniff test.

You can run Linux on a MacBook Pro and get similar power efficiency.

Or run third party apps on macOS and similarly get good efficiency.

Unfortunately, contrary to popular belief, you cannot run Linux natively on recent MacBooks (M4) today.

That doesn’t really affect what I’m saying, though. Yes, support capped out with the M2, but you can still observe the efficiency properties there.

Depends on what "natively" means. You can virtualize Linux through several means, such as VirtualBox.

...but you won't get similar power efficiency, which was claimed.

You can run Linux on a MacBook Pro and get similar power efficiency.

What? No. Asahi is spectacular for what it accomplished, but battery life is still far worse than macOS.

I am not saying that it is only software. It's everything from hardware to a gazillion optimizations in macOS.

It’s worse at switching power states, but at a given power state it is in the ballpark of macOS power use.

Where it lags is anything that uses hardware acceleration, and properly dropping down to the lower power states.

The fastest and most efficient Windows laptop in the world is an M4 MacBook running Parallels.

How does it compare with VMWare? I’d rather not use Parallels…

Edit: whoever downvoted, please explain: what's wrong with preferring VMWare? Also, for me it has historically (2007-2012) been more performant, but I haven't used it lately.

Looks about the same between Parallels and VMWare: https://browser.geekbench.com/v6/cpu/compare/13494570?baseli...

Also, here's proof that M4 Max running Parallels is the fastest Windows laptop: https://browser.geekbench.com/v6/cpu/compare/13494385?baseli...

The M4 Max is running macOS, running Parallels, running Windows, using only 14 of its 16 possible cores, and it's still faster than AMD's very best laptop chip.

No, it's not, it's absolutely the hardware. The vertical integration surely doesn't hurt, but third-party software runs very fast and efficiently on M-series too, including Asahi Linux.

Does Asahi Linux run efficiently now? I tried it on an M1 about two years ago. Battery life was maybe 30% of what you get on macOS.