Hi,

My daily workhorse is an M1 Pro that I purchased on release day. It has been one of the best tech purchases I have made; even now it handles anything I throw at it. My daily workload regularly involves an Android emulator, an iOS simulator, and a number of Docker containers running simultaneously, and I never hear the fans. Battery life has taken a bit of a hit, but it is still very respectable.

I wanted a new personal laptop and was debating between a MacBook Air and a Framework 13 running Linux. I wanted to lean into learning something new, so I went with the Framework, and I must admit I am regretting it a bit.

The M1 was released back in 2020, and I bought the Ryzen AI 340, one of AMD's newest 2025 chips, so AMD has had five years of extra development. I expected them to get close to the M1 in terms of battery efficiency and thermals.

The Ryzen uses TSMC's N4P process compared to the M1's older N5 process. I managed to find a TSMC press release showing the performance/efficiency gains from the newer process: “When compared to N5, N4P offers users a reported +11% performance boost or a 22% reduction in power consumption. Beyond that, N4P can offer users a 6% increase in transistor density over N5”

I am sorely disappointed; using the Framework feels like using an older Intel-based Mac. If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

Why haven’t AMD/Intel been able to catch up? Is x86 just not able to keep up with the ARM architecture? When can we expect an x86 laptop chip to match the M1 in efficiency/thermals?!

To be fair, I haven’t tried Windows on the Framework yet; it might be my Linux setup being inefficient.

Cheers, Stephen

Battery efficiency comes from a million little optimizations in the technology stack, most of which come down to using the CPU as little as possible. As such, the instruction set architecture and process node aren't usually that important when it comes to your battery life.

If you fully load the CPU and calculate how much energy an AI 340 needs to perform a fixed workload and compare that to an M1, you'll probably find similar results, but that only matters for your battery life if you're doing things like Blender renders, big compiles, or gaming.

Take for example this battery life gaming benchmark for an M1 Air: https://www.youtube.com/watch?v=jYSMfRKsmOU. 2.5 hours is about what you'd expect from an x86 laptop, possibly even worse than the FW13 you're comparing here. But turn down the settings so that the M1 CPU and GPU are mostly idle, and bam, you get 10+ hours.

Another example would be a ~5-year-old mobile Qualcomm chip. It's on a worse process node than the AMD AI 340, much slower, with significantly worse performance per watt, and yet it barely gets hot and sips power.

All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I've had to enable GPU video decoding on my FW16 and haven't noticed the fans on YouTube.
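
If you want to check whether the driver side of that is even in place, here's a minimal sketch (assuming the libva-utils package, which provides the vainfo tool, is installed); it just shells out to vainfo and looks for decode entrypoints. The browser still has to be configured to actually use them.

  // Minimal sketch: run `vainfo` (from libva-utils) and look for
  // VAEntrypointVLD lines, which indicate fixed-function video decode support.
  #include <array>
  #include <cstdio>
  #include <iostream>
  #include <string>

  int main() {
      std::array<char, 256> buf{};
      std::string output;

      // popen runs the command and lets us read its stdout.
      FILE* pipe = popen("vainfo 2>/dev/null", "r");
      if (!pipe) {
          std::cerr << "Could not run vainfo (is libva-utils installed?)\n";
          return 1;
      }
      while (fgets(buf.data(), buf.size(), pipe) != nullptr) {
          output += buf.data();
      }
      pclose(pipe);

      const bool has_decode = output.find("VAEntrypointVLD") != std::string::npos;
      std::cout << (has_decode ? "VA-API hardware decode entrypoints found\n"
                               : "No VA-API decode entrypoints reported\n");
      return has_decode ? 0 : 1;
  }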

A huge reason for the low power usage is the iPhone.

Apple spent years incrementally improving the efficiency and performance of their chips for phones. Intel and AMD were more desktop-focused, so power efficiency wasn't the goal. When Apple's chips got so good they could transition into laptops, x86 wasn't in the same ballpark.

Also the iPhone is the most lucrative product of all time (I think) and Apple poured a tonne of that money into R&D and taking the top engineers from Intel, AMD, and ARM, building one of the best silicon teams.

Apple purchased P.A. Semi (Palo Alto Semiconductor), which made the biggest difference. One of their best acquisitions ever in my opinion… not that they make all that many of those anyway.

Apple actually makes a lot more acquisitions than you think, but they are rarely very high profile/talked about: https://en.wikipedia.org/wiki/List_of_mergers_and_acquisitio...

> One of their best acquisitions ever in my opinion…

NeXT? But yes, I completely get what you’re saying, I just couldn’t resist. It was an amazingly long sighted strategic move, for sure.

Textbook Innovator’s Dilemma.

> and Apple poured a tonne of that money into R&D and taking the top engineers from Intel, AMD, and ARM, building one of the best silicon teams.

How much silicon did Apple actually create? I thought they outsourced all the components.

They bought P.A. Semi (Palo Alto Semiconductor) in 2008, which is where all their ARM chip designs came from.

https://en.wikipedia.org/wiki/P.A._Semi

Outsourced to who? The only companies with the engineers you’d need are the other CPU makers like Intel, AMD, Qualcomm, and Nvidia. And none of them make a CPU as efficient as Apple does.

CPU yes, but what about the rest of the iPhone?

They design much more in house than any other smartphone brand, except maybe Samsung.

CPU, GPU, neural processor, image signal processor, U1 chip for device tracking, Secure Enclave for biometrics, a 5G modem (only used in the 16e so far)…

They don’t manufacture the chips in house of course. They contract that out to TSMC and other companies.

Arm exists; it is unknown how much tech Apple gets from Arm.

Arm licenses their designs to everybody. They are okay, but you are never going to make market-leading processors by using the stock Arm designs.

The M1 and M2 were beating the best-in-class i7 when they were released, IIRC.

Apple took the ARM base design (they licensed it), and then they modified and tweaked it.

You get the ARM ISA, and compilers that work for ARM will compile to Apple Silicon. It's just that the actual hardware you get is better than the base design, and therefore beats other ARM processors in benchmarks.

> Apple took the ARM base design (they licensed it), and then they modified and tweaked it.

More likely it was derived from PWRficient, or a clean sheet design that took lessons from it.

It's more than that. They have an unlimited license to Arm designs and can change them as they see fit, since they were an early investor (or something along those lines). Other manufacturers can't get these terms, or if they can, it would be prohibitively expensive.

The thing about Apple having a “special license” due to being a partial founder of Arm is an urban legend. They have an architectural license, just like several other companies making custom Arm CPUs do.

Apple has an architectural license that lets them build their own ARM cores:

https://www.electronicsweekly.com/news/business/finance/arm-...

It is very unlikely Apple uses anything from ARM’s core designs, since that would require paying an additional license fee and Apple was able to design superior cores using its architectural license.

Yep, Apple was a significant early investor in ARM. https://appleinsider.com/articles/23/09/05/apple-arm-have-be...

And TSMC (and therefore ASML, etc.); Apple usually reserves the newest upcoming node for its own production.

Besides its SoCs, Apple has also made dedicated silicon for secure enclaves, Wi-Fi, Bluetooth, ultra-wideband and cellular radios, and motion coprocessors.

Apple bought P.A. Semi a long time ago. They have a significant silicon development group. Their ARM architecture license (they were an early investor in ARM) means they get to do basically whatever they want with the ARM ISA. The SoCs in pretty much all their devices are designed in-house.

What about all the components and sensors?

Apple has bought startups with various technologies like Anobit, that developed advanced flash memory controllers, and have funded development efforts by partners. For example Apple worked hand in glove with Sharp to develop the tech for their 5K display panels. They also now have their own cellular chip designs in some models, in their quest for independence from Qualcomm. That’s all from memory, I’m sure there are many more examples.

I vaguely remember Intel tried to get into the low-power / smartphone / tablet space at the time with their Atom line [0] in the late 2000s, but due to core architecture issues they could never reach the efficiency of ARM-based chips.

[0] https://en.wikipedia.org/wiki/Intel_Atom

I don't think it was core architecture issues. My impression is that over the years their efforts to get into low-power devices never got the full force of their engineering prowess.

I don't think it is so much the efficiency of their chips for their hardware (phones) as the efficiency of their OS for their chips and hardware design (like unified memory).

It is likely the hardware efficiency of their chips. Apple SoCs running industry-standard benchmarks still run very cool, yet still show dominant performance. The OS efficiency helps, but even under extreme stress tests like SPEC, the Apple SoCs dominate in perf & power.

See Lunar Lake on TSMC N3B, 4+4, on-package DRAM versus the M3 on TSMC N3B, 4+4, on-package DRAM: https://youtu.be/ymoiWv9BF7Q?t=531

The 258V (TSMC N3B) has a worse perf / W 1T curve than the Apple M1 (TSMC N5).

> It is likely the hardware efficiency of their chips. Apple SoCs running industry-standard benchmarks still run very cool, yet still show dominant performance

Dieselgate?

I have heard that Apple Silicon chips are designed around the retain-release cycle that goes back to NeXT and is still here today (hidden by ARC compilation), but I don't think that's the whole story. Back when the M1s came out, many benchmarks showed virtualized Windows blowing the doors off of market-equivalent x86 CPUs.

Also, there are the obvious benefits of being TSMC's best customer. And when you design a chip for low power consumption, that means you've got a higher ceiling when you introduce cooling.

The SoC benefits are being ignored by some people here. Apple doesn't control every piece of software as some here posit; however, OS optimizations and utilization of extra-efficiency cores (which, though mainly a matter of SoC design, also need specific OS support) are part of the performance.

I thought they just acquired P.A. Semi, job done.

When they bought P.A. Semi, the company was working on IBM Power architecture chips. It was very much the team Apple was after, not any one particular technology.

That was a part of it, yes.

But do not forget how focused they (AMD/Intel, especially in the Opteron days -- edit) were on the server market.

> A huge reason for the low power usage is the iPhone.

No, the main reason for the better battery life is the RISC architecture. PCs on the ARM architecture see the same gains.

Any downvoters care to actually leave me a reply telling me why?

I'm not wrong!

Because it’s a take that sounds like someone who has been reading comp.sys.mac.advocacy from 1995, when the PPC vs x86 wars were going on (and when PPC chips were already behind in performance), up through 2005, when Apple gave up and went to Intel.

> All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

Apple is vertically integrated and can optimize at the OS level and for many of the applications they ship with the device.

Compare that to how many cooks are in the kitchen in Wintel land. A perfect example is trying to get to the bottom of why your Windows laptop won't go to sleep and cooks itself in your backpack. Unless something's changed, last I checked it was a circular firing squad between the laptop manufacturer, Microsoft, and various hardware vendors, all blaming each other.

> Apple is vertically integrated and can optimize

> Compare that to how many cooks are in the kitchen in Wintel land. A perfect example is trying to get to the bottom of why your Windows laptop won't go to sleep and cooks itself in your backpack

So, I was thinking like this as well, and after I lost my Carbon X1 I felt adventurous, but not too adventurous, and wanted a laptop that "could just work". The thinking was "If Microsoft makes both the hardware and the software, it has to work perfectly fine, right?", so I bit my lip and got a Surface Pro 8.

What a horrible laptop that was, even while I was trialing just running Windows on it. It overheated almost immediately by itself, just idling, and STILL suffers from the issue where the laptop sometimes wakes itself while in my backpack, so when I actually needed it, of course it was hot and out of battery. I've owned a lot of shit laptops through the years, even some without keys in the keyboard, back when I was dirt-poor, but the Surface Pro 8 is the worst of them all. I regret buying it a lot.

I guess my point is that just because Apple seems really good at the whole "vertically integrated" concept, it isn't magic by itself, and Microsoft continues to fuck up the very same thing even though they control the entire stack, so you'll still end up with backpack laptops turning themselves on/not turning off properly.

I'd wager you could let Microsoft own every piece of physical material in the world, and they'd still not be able to make a decent laptop.

Surprised to hear this. Back in the Surface Pro 4 days, the hardware was great. I made it through college doing 95% of my work on a Surface Pro 4 tablet with the magnetic keyboard and almost always made it through the entire day without having to plug it in.

My wife swears by her surface pros, and she has owned a few.

I've had a few Surface Book 2s for work, and they were fine except that they needed more RAM, and there was some issue with the connection between the screen and base which made USB headsets hinky.

Apple has been vertically integrated for 50 years. Microsoft has been horizontally integrated for 50 years.

That's why Apple is good at making a whole single system that works by itself, and Microsoft is good at making a system that works with almost everything almost everyone has made almost ever.

Also on the HN front page today:

> Framework 16

> The 2nd Gen Keyboard retains the same hardware as the 1st Gen but introduces refreshed artwork and updated firmware, which includes a fix to prevent the system from waking while carried in a bag.

There are some reports of this with MacBooks as well. But my (non-scientific) impression is that a lot more people in Wintel land are seeing it. All of my work laptops and a few of my personal laptops have done this to me since I started using Windows 10/11.

Microsoft is pushing "Modern Standby" over actual sleep, so laptops can download and install updates while closed at night.

I remember a time when this was supposed to be Wintel's advantage. It's really strange to now be in a time where Apple leads the consumer computing industry in hardware performance, yet is utterly failing at evolving the actual experience of using their computers. I'm pretty sure I'm not the only one who would gladly give up a bit of performance if it were going to result in a polished, consistent UI/UX based on the actual science of human interface design rather than this usability hellscape the Alan Dye era is sending us into.

macOS is a resource hungry pig, I wouldn't bet too much on it making a difference.

> It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I've had to enable GPU video decoding on my FW16 and haven't noticed the fans on YouTube.

I've worked in video delivery for quite a while.

If I were to write the law, decision-makers wilfully forcing software video decoding where hardware is available would be made to sit on these CPUs with their bare buttocks. If that sounds inhumane, then yes, this is the harm they're bringing upon their users, and maybe it's time to stop turning the other cheek.

I run Linux Mint Mate on a 10 year old laptop. Everything works fine, but watching YouTube makes my wireless USB dongle mouse stutter a LOT. Basically if CPU usage goes up, mouse goes to hell.

Are you telling me that for some reason it's not using any hardware acceleration available while watching YouTube? How do I fix it?

It's probably the 2.4GHz WiFi transmitter interfering with the 2.4GHz mouse transmitter. You probably notice it during YouTube because it's constantly downloading. Try a wired mouse.

Interesting theory. The wired mouse is trouble-free, but I figured that's because of a better sampling rate and less overhead overall. Maybe I'll try a Bluetooth mouse or some other frequency, or the laptop on wired Ethernet, to see if the theory pans out.

The easiest way is to use Chrome or a Chrome-based browser, since they bundle codecs with the browser. If you're using Firefox, you need to make sure you have the codecs. I know nothing about Mint specifically, so I don't know whether it automatically installs codecs or not.

You specifically don't want to use the bundled codecs since those would be CPU decode only.

Interesting. I'll look into that more.

I'm using Brave, and it seems the "enable hardware acceleration" box is checked.

  All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.
This isn't true. Yes, uncore power consumption is very important, but so is CPU load efficiency. The faster the CPU can finish a task, the faster it can go back to sleep, aka race to sleep.

Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

Another thing that makes Apple laptops feel way more efficient is that they use a true big.LITTLE design, while AMD and Intel's little cores are actually designed for area efficiency rather than power efficiency. In the case of Intel, they stuff in as many little cores as possible to win MT benchmarks. In real-world applications, the little cores are next to useless because most applications prefer a few fast cores over many slow cores.
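
A toy Amdahl's-law calculation illustrates that last point. The numbers below (an 80%-parallel workload, little cores at half the speed of a big core) are made up purely for illustration:

  // Toy Amdahl's-law comparison: a few fast cores vs many half-speed cores.
  // The parallel fraction and relative core speeds are illustrative only.
  #include <iostream>

  // Speedup relative to ONE fast core, given the parallel fraction of the work,
  // the number of cores, and each core's speed relative to the fast core.
  static double speedup(double parallel, double cores, double core_speed) {
      const double serial_time   = (1.0 - parallel) / core_speed;
      const double parallel_time = parallel / (cores * core_speed);
      return 1.0 / (serial_time + parallel_time);
  }

  int main() {
      const double p = 0.8; // assume 80% of the work parallelizes
      std::cout << "4 fast cores:        " << speedup(p, 4, 1.0) << "x\n"; // 2.5x
      std::cout << "16 half-speed cores: " << speedup(p, 16, 0.5) << "x\n"; // 2.0x
  }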

> Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

This is not true. For high-throughput server software x86 is significantly more efficient than Apple Silicon. Apple Silicon optimizes for idle states and x86 optimizes for throughput, which assumes very different use cases. One of the challenges for using x86 in laptops is that the microarchitectures are server-optimized at their heart.

ARM in general does not have the top-end performance of x86 if you are doing any kind of performance engineering. I don't think that is controversial. I'd still much rather have Apple Silicon in my laptop.

  For high-throughput server software x86 is significantly more efficient than Apple Silicon.
In the server space, x86 has the highest performance right now. Yes. That's true. That's also because Apple does not make server parts. Look for Qualcomm to try to win the server performance crown in the next few years with their Oryon cores.

That said, Graviton is at least 50% of all AWS deployments now. So it's winning vs x86.

  ARM in general does not have the top-end performance of x86 if you are doing any kind of performance engineering. I don't think that is controversial.
I think you'll have to define what top-end means and what performance engineering means.

> Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

This is false; in cross-platform tasks it's on par with, if not worse than, the latest x86 arches. As others pointed out, 2.5h of gaming is about what you'd expect from a similarly built x86 machine.

They are winning due to lower idle and low-load consumption, which they achieve by integrating everything as much as possible - something that's basically impossible for AMD and Intel.

> The faster the CPU can finish a task, the faster it can go back to sleep, aka race to sleep.

May have been true when CPU manufacturers left a ton of headroom on the V/F curve, but not really true anymore. A Zen 4 core's power draw shoots up sharply past 4.6 GHz and nearly triples as you approach 5.5 GHz (compared to 4.6); are you going to complete the task three times faster at 5.5 GHz?
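
To put rough numbers on it, using the ~3x power figure above and assuming (optimistically) that runtime scales inversely with frequency:

  // Back-of-the-envelope energy cost of chasing the top of the V/F curve.
  // The ~3x power figure is the rough Zen 4 number mentioned above; assuming
  // runtime scales with 1/frequency is optimistic (memory-bound work scales
  // worse, which makes the high-clock case look even less attractive).
  #include <iostream>

  int main() {
      const double freq_ratio  = 5.5 / 4.6; // ~1.2x faster, at best
      const double power_ratio = 3.0;       // ~3x the core power

      // Energy per task = power * time, and time shrinks by at most freq_ratio.
      const double energy_ratio = power_ratio / freq_ratio;

      std::cout << "Energy per task at 5.5 GHz vs 4.6 GHz: ~"
                << energy_ratio << "x\n"; // ~2.5x more energy for the same work
  }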

  This is false; in cross-platform tasks it's on par with, if not worse than, the latest x86 arches.
This is Cinebench 2024, a cross-platform application: https://imgur.com/a/yvpEpKF

  They are winning due to lower idle and low-load consumption, which they achieve by integrating everything as much as possible - something that's basically impossible for AMD and Intel.
Weird because LNL achieved similar idle wattage as Apple Silicon.[0] Why do you say it's impossible?

  May have been true when CPU manufacturers left a ton of headroom on the V/F curve, but not really true anymore. A Zen 4 core's power draw shoots up sharply past 4.6 GHz and nearly triples as you approach 5.5 GHz (compared to 4.6); are you going to complete the task three times faster at 5.5 GHz?
Honestly not sure how your statement is relevant.

[0]https://www.notebookcheck.net/Dell-XPS-13-9350-laptop-review...

This is Cinebench 2024, a cross-platform application: https://imgur.com/a/yvpEpKF

You sure like that table, don't you? Trying to find the source of those Blender numbers, I came across many Reddit posts of yours with that exact same table. Sadly those also don't have a source - they are not from the Notebookcheck source.

The reason I keep reposting this table is that people post incorrect statements about AMD/Apple so often, often with zero data backing them up.

For the Blender numbers, the M4 Pro figures came from Max Tech's review.[0] I don't remember where I got the Strix Halo numbers from; it could have been another YouTube video or some old Notebookcheck article.

Anyway, Blender has official GPU benchmark numbers now:

M4 Pro: 2497 [1]

Strix Halo: 1304 [2]

So the M4 Pro is roughly 90% faster in the latest Blender. The most likely reason Blender's official numbers favor the M4 Pro even more is more recent optimizations.

Sources:

[0]https://youtu.be/0aLg_a9yrZk?si=NKcx3cl0NVdn4bwk&t=325

[1] https://opendata.blender.org/devices/Apple%20M4%20Pro%20(GPU...

[2] https://opendata.blender.org/devices/AMD%20Radeon%208060S%20...

Weren't we comparing CPUs though? Those Blender benchmarks are for GPUs.

Here is M4 Max CPU https://opendata.blender.org/devices/Apple%20M4%20Max/ - median score 475

Ryzen MAX+ PRO 395 shows median score 448 (can't link because the site does not seem to cope well with + or / in product names)

Resulting in M4 winning by 6%

  Weren't we comparing CPUs though? Those Blender benchmarks are for GPUs.
Yes, but I was asked about Blender GPU.

Blender CPU tasks are highly parallel. AMD's Ryzen Max 395 has great MT performance; it's generally 5-20% slower in CPU MT than the M4 Max, depending on the application.

> Weird because LNL achieved similar idle wattage as Apple Silicon.[0] Why do you say it's impossible?

And where is LNL now? How's the company that produced it? Even under Pat Gelsinger they said that LNL is a one-off and they're not going to make any more of them. It's commercially infeasible.

> Honestly not sure how your statement is relevant.

How is you bringing up synthetics relevant to race to idle?

Regardless, a number of things can be done on Strix Halo to improve performance; the first would be switching to an optimized Linux distro, or at least kernel. That would claw back 5-20% depending on the task. It would also improve single-core efficiency: I've seen my 7945HX drop from 14-15 W idle on Windows to about 7-8 W on Linux, because Windows likes to poke the CCDs non-stop and throw tasks around willy-nilly, which means the second CCD and the I/O die never properly idle.

  And where is LNL now? How's the company that produced it? Even under Pat Gelsinger they said that LNL is a one-off and they're not going to make any more of them. It's commercially infeasible.
Why does it matter that LNL is bad economically? LNL shows that it's definitely possible to achieve the same or even better idle wattage than Apple Silicon.

  How is you bringing up synthetics relevant to race to idle?
I truly don't understand what you mean.

> All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

A good demonstration is the Android kernel. By far the biggest difference between it and the stock Linux kernel is power management. Many subsystems down to the process scheduler are modified and tuned to improve battery life.

And the more relevant case for laptops is macOS, which is heavily optimized for battery life and power draw in ways that Linux just isn't, and neither is Windows. A lot of the problems here can't actually be fixed by Intel, AMD, or anyone designing x86 laptops, because getting that level of efficiency requires the ability to strongly lead the app developer community. It also requires highly competent operating system developers focusing on the issue for a very long time, and being able to co-design the operating system, firmware, and hardware together. Microsoft barely cares about Windows anymore, the Linux guys have only cared about servers since forever, and that leaves Apple alone in the market. I doubt anything will change anytime soon.

>And the more relevant case for laptops is macOS, which is heavily optimized for battery life and power draw in ways that Linux just isn't, and neither is Windows.

What are some examples of power draw savings that Linux is leaving on the table?

Power efficiency is very important to servers too, for cost rather than battery life. But energy is energy. Thus, I suspect the extra power draw is in userland systems that are specific to the desktop, like desktop environments, so using a simpler desktop environment may be worthwhile.

It's important but not relative to performance. Perf/watt thinking has a much longer history in mobile and laptop spaces. Even in servers most workloads haven't migrated to ARM.

I used Ubuntu around 2015-2018 and got hit with a nasty defect around the GNOME Online Accounts integration (please correct me if the words are wrong here). For some reason, it got stuck in a loop or a bad state on my machine. I have since decided that I will never add any of my online accounts - Facebook, Google, or anything - to GNOME.

If x86 just officially said “we’re cutting off 32-bit legacy” one day (similar to how Apple did), they could toss out 95% of the crap that makes them power inefficient. Just think of the difference dropping A10 offered for memory efficiency.

“Modern Standby” could be made to actually work, ACPI states could be fixed, a functional wake-up state built anew, etc. Hell, while it would allow pared down CPUs, you could have a stop-gap where run mode was customized in firmware.

Too much credit is given to Apple for “owning the stack” and too little attention to legacy x86 cruft that allows you to run classic Doom and Commander Keen on modern machines.

>If x86 just officially said “we’re cutting off 32-bit legacy” one day (similar to how Apple did), they could toss out 95% of the crap that makes them power inefficient.

Where do you get this from? I could understand that they could get rid of the die area devoted to x86 decoding, but as I understand it x86 and x86-64 instructions get interpreted by the same execution units, which are bitness blind. What makes you think it's x86 support that's responsible for the vast majority of power inefficiency in x86-64 processors?

Intel has proposed APX to address this. It does away with some of the 32-bit garbage that complicates design for no good payoff. Most importantly, it increases the register count from 16 to 32 and allows 3-register instructions (almost all x86 instructions are 1-register or 2-register instructions). This would strip out tons of MOV instructions which was proven with AMD64 to have a decent impact on performance.

Reduced I-cache, uop cache, and decoder pressure would also have a beneficial impact. On the flip side, APX instructions would all be an entire byte longer than their AMD64 counterparts, so some of the benefits would be more muted than they might first appear. Optimizing between 16 registers with shorter instructions vs 32 registers with longer instructions is yet another tradeoff for compilers to make (and takes another step down the path of being completely unoptimizable by humans).

>This would strip out tons of MOV instructions which was proven with AMD64 to have a decent impact on performance.

Sure, but the topic is optimizing power efficiency by removing support for an instruction set. That aside, if an instruction isn't very performant, it isn't much of an issue per se. It just means it won't get used much and so chip design resources will be suboptimally allocated. That's a problem for Intel and AMD, and for nobody else.

From what I understood, it's not "32-bit instructions" that are the problem; it's the load of crap associated with those 32-bit processors. There's more to x86 than just the instruction set. Operating systems need to carry the baggage in x86 if they want to allow users to run on old and new processors.

Before addressing anything else, "software is complicated by having to support legacy stuff" is not a valid argument for removing that support at the hardware level. If a software developer wishes to design their software without that legacy support, that's their prerogative.

>Operating systems need to carry the baggage in x86 if they want to allow users to run on old and new processors.

What do you mean by this exactly? Are you talking about hybrid execution like WOW64, or simple multi-platform support like the Linux kernel?

WOW64 is irrelevant as far as power efficiency is concerned if the user doesn't run any x86 software. If the user is running x86 software, that's a reason not to remove that support.

Multi-platform support shouldn't have an effect on power efficiency, beyond complicating the design of the system. Saying that the Linux kernel should stop supporting x86 so x86-64 can be more power-efficient is like saying that it should stop supporting... whatever, PowerPC, for that same reason. It's a non sequitur.

> “Modern Standby” could be made to actually work, ACPI states could be fixed, a functional wake-up state built anew, etc. Hell, while it would allow pared down CPUs, you could have a stop-gap where run mode was customized in firmware.

I'm confused, how is any of this related to "x86" and not the diverse array of third party hardware and software built with varying degrees of competence?

It's a shame they are so bad at upstreaming stuff, and run on older kernels (which in turn makes upstreaming harder).

> It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding.

To be fair, usually Linux itself has hardware acceleration available, but browser vendors tend to disable GPU rendering except on controlled/known-good combinations of OS/hardware/drivers, and they do much less testing on Linux. In most cases you can force-enable GPU rendering in about:config, try it out yourself, and leave it on unless you get recurring crashes.

The only browser I’ve ever had issues with enabling video acceleration on Linux is Firefox.

All the Blink-based ones just work, as long as the proper libraries are installed and said libraries properly detect hardware support.

When I enabled HW acceleration on my Linux laptop to see how much it would improve battery life in Linux, my automated test (which is basically just browsing Reddit) would start crashing every 20 minutes or so.

I run Fedora, and for legal reasons they ship a version that has this problem. Have you tried Mozilla's Flatpak build? I use it instead and it resolves all my problems.

I once saw a high resolution CPU graph of a video playing in Safari. It was completely dead except for a blip every 1/30th of a second.

Incredible discipline. The Chrome graph in comparison was a mess.

The Safari team explicitly treats performance as a target. I just wish they weren't so bad about extensions and ad blocking, and I'd use it as my daily driver. But those paper cuts make me go back to Chromium browsers all the time.

I find Orion has similar power efficiency but avoids those papercuts: https://kagi.com/orion/

I disable CPU turbo boost on Linux. The fans rarely start on the laptop and the system is generally cool. Even working on development and compilation, I rarely need the extra perf. On my 10-year-old laptop I also cap the max clock to 95% to stop the fans from always starting. YMMV.
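
For reference, this is roughly all that "disable turbo" amounts to on Linux; a minimal sketch assuming the usual sysfs knobs are present (intel_pstate's no_turbo, or the generic cpufreq boost file used by other drivers), and it needs root:

  // Minimal sketch of disabling turbo/boost via sysfs on Linux. Which knob
  // exists depends on the cpufreq driver in use; run as root.
  #include <fstream>
  #include <iostream>

  // Write a value to a sysfs file; returns false if the file is missing or
  // the write fails (e.g. insufficient permissions).
  static bool write_sysfs(const char* path, const char* value) {
      std::ofstream f(path);
      if (!f) return false;
      f << value << std::flush;
      return f.good();
  }

  int main() {
      // intel_pstate driver: writing 1 disables turbo.
      if (write_sysfs("/sys/devices/system/cpu/intel_pstate/no_turbo", "1")) {
          std::cout << "Turbo disabled via intel_pstate\n";
          return 0;
      }
      // Generic cpufreq boost toggle (acpi-cpufreq and others): 0 disables boost.
      if (write_sysfs("/sys/devices/system/cpu/cpufreq/boost", "0")) {
          std::cout << "Boost disabled via cpufreq\n";
          return 0;
      }
      std::cerr << "No known boost control found (or not running as root)\n";
      return 1;
  }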

Hell, Apple CPUs are even optimized for Apple software GC calls like Retain/Release objects. It seems if you want optimal performance and power efficiency, you need to own both hardware and software.

Looks like general-purpose CPUs are on the losing end.

Maybe Intel should invent a desktop+mobile OS and design bespoke chips for it.

> Apple CPUs are even optimized for Apple software GC calls like Retain/Release objects.

I assume this is referring to the tweet from the launch of the M1 showing off that retaining and releasing an NSObject is like 3x faster. That's more of a general case of the ARM ISA being a better fit for modern software than x86, not some specific optimization for Apple's software.

x86 was designed long before desktops had multi-core processors and out-of-order execution, so for backwards compatibility reasons the architecture severely restricts how the processor is allowed to reorder memory operations. ARM was designed later, and requires software to explicitly request synchronization of memory operations where it's needed, which is much more performant and a closer match for the expectations of modern software, particularly post-C/C++11 (which have a weak memory model at the language level).

Reference counting operations are simple atomic increments and decrements, and when your software uses these operations heavily (like Apple's does), it can benefit significantly from running on hardware with a weak memory model.
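
Concretely, the hot path looks something like this minimal sketch of the shared_ptr-style pattern (not Apple's actual retain/release implementation): the increment can be relaxed, while the decrement needs acquire/release so the last owner observes every write made before the object is destroyed. On x86 every locked read-modify-write acts as a full barrier anyway, so the relaxed increment can't get much cheaper; a weakly-ordered ISA has more room to exploit it.

  // Minimal intrusive refcount sketch (the std::shared_ptr-style pattern,
  // not Apple's actual retain/release implementation).
  #include <atomic>

  class RefCounted {
  public:
      void retain() {
          // Plain increment: no ordering needed, we already hold a reference.
          refs_.fetch_add(1, std::memory_order_relaxed);
      }

      void release() {
          // acq_rel so the thread dropping the last reference observes all
          // writes made by other owners before the object is destroyed.
          if (refs_.fetch_sub(1, std::memory_order_acq_rel) == 1) {
              delete this;
          }
      }

  protected:
      virtual ~RefCounted() = default;

  private:
      std::atomic<long> refs_{1}; // the creator holds the initial reference
  };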

> I assume this is referring to the tweet from the launch of the M1 showing off that retaining and releasing an NSObject is like 3x faster. That's more of a general case of the ARM ISA being a better fit for modern software than x86, not some specific optimization for Apple's software.

It's not really even the ISA, mainly the implementation. Atomics on Apple cores are 3x faster than on Intel (6 cycles of back-to-back latency vs 18). AMD's atomics have 6-cycle latency.

  It seems if you want optimal performance and power efficiency, you need to own both hardware and software.
Does Apple optimize the OS for its chips and vice versa? Yes. However, Apple Silicon hardware is just that good and that far ahead of x86.

Here's an M4 Max running macOS running Parallels running Windows when compared to the fastest AMD laptop chip: https://browser.geekbench.com/v6/cpu/compare/13494385?baseli...

M4 Max is still faster even with 14 out of 16 possible cores being used. You can't chalk that up to optimizations anymore because Windows has no Apple Silicon optimizations.

Not really sure whether it makes a difference, but the Parallels VM is running Windows Pro, while the ASUS gaming laptop is running Windows Home.

> Maybe Intel should invent a desktop+mobile OS and design bespoke chips for it.

Wouldn't it be easier for Intel to heavily modify the Linux kernel instead of writing their own stack?

They could even go as far as writing the sleep utilities for laptops, or even their own window manager, to take advantage of specific mods in the ISA.

Intel was working with Nokia to heavily invest in MeeGo until it was killed by Elop + Microsoft.

If it hadn't been killed, it might have become something interesting today.

They /did/ this, but notice the "was" at the top of the page: https://www.clearlinux.org/

> Maybe Intel should invent a desktop+mobile OS and design bespoke chips for it.

Or contribute efficiency updates to popular open projects like Firefox, Chromium, etc.

> most of which come down to using the CPU as little as possible.

At least on mobile platforms, Apple advocates the other way with race to sleep - do the calculation as fast as you can with powerful cores so that the whole chip can go back to sleep earlier and take naps more often.

Intel promoted the same idea under the name HUGI (Hurry Up and Go Idle) about 15 years ago, when ultrabooks were the new hot thing.

But when Apple says it, software devs actually listen.

Peer pressure. When everybody else does it and you don't, your app sticks out like a sore thumb and makes users unhappy.

The other aspect of it is that paid software is more prevalent in macOS land, and the prices are generally higher than on Windows. But the flip side of that is that user feedback is taken more seriously.

And then Microsoft adds an animated news widget to the left corner of the taskbar, making sure the CPU never gets to idle.

Which should also mean that using that M1 machine with Linux will give an Intel/AMD-like experience, not the M1-with-macOS experience.

Turning down the settings will get you a worse experience, especially if you turn them down so far that the CPU and GPU are "mostly idle". Not comparable.

Sounds like death by (2^10 - 24) cuts for the x86 architecture.

I honestly don't see myself ever leaving MacBooks at this point. It's the whole package: the battery life is insane (I've literally never had a dead laptop when I needed it, no matter what I'm doing or where I'm at); it runs circles around every other computer I own, save for my beastly gaming PC; the stability and consistency of macOS, and the underlying Unix base for a lot of tooling; all the way down to the build quality being damn near flawless, save for the annoying lack of ports (though increasingly, I find myself needing ports less and less).

Like, would I prefer an older-style MacBook overall, with an integrated card reader, HDMI port, Ethernet jack, all that? Yeah, sure. But to get that now I have to go to a PC laptop, and there are so many compromises there. The battery life isn't even in the same zip code as a Mac, they're much heavier, the chips run hot even just doing web browsing let alone any actual work, and they CREAK. Like, my god, I don't remember the last time I had a Windows laptop open and it wasn't making all manner of creaks and groans and squeaks.

The last one would be solved, I guess, if you went for something super high-end, or at least I hope it would be, but I dunno; if I'm dropping $3k+ either way, I'd just as soon stay with the MacBook.

> Like, would I prefer an older-style MacBook overall, with an integrated card reader, HDMI port, Ethernet jack, all that? Yeah, sure.

Modern MacBook Pros have 2 of the 3 (card reader and HDMI port), and they brought back my beloved MagSafe charging.

No shit! I'm still rocking the M1 Pro for personal use and the M2 Air for work, so I do have MagSafe back for one of them at least, but just USB-C besides that.

But yeah IMHO there's just no comparison. Unless you're one of those folks who simply cannot fucking stand Mac, it's just no contest.

Even the high-end ones (Razers, Asus, Surface Books, Lenovos) are mere lookalikes and don't run anywhere near as well as the MacBooks. They're hot and heavy and loud, full of driver issues and discrete-graphics-switching headaches, and of course the endless ads and AI spam of modern Windows. No comparison at all...

> Why haven’t AMD/Intel been able to catch up? Is x86 just not able to keep up with the ARM architecture? When can we expect an x86 laptop chip to match the M1 in efficiency/thermals?!

AMD kind of has: the "Ryzen AI Max+ 395" is (within a 5% margin or so) pretty close to the M4 Pro on both performance and energy use. (It's in the Framework Desktop, for example, but not in their laptop lineup yet.)

AMD/Intel haven't surpassed Apple yet (there's no answer for the M4 Max / M3 Ultra without exploding the energy use on the AMD/Intel side), but AMD does at least have a comparable and competitive offering.

The M4 Pro was a massive step back in perf/watt from the M3 Pro. To my knowledge, there aren't any M4 die shots around, which has led to speculation that yields on the M4 Max were predicted to be really bad, so they made the M4 Pro a binned M4 Max, which comes with tradeoffs like much worse leakage current.

That said, Hardware Canucks did a review of the 395 in a mobile form factor (the Asus ROG Flow Z13) with the TDP at 70 W (lower than the max 120 W TDP you see in desktop reviews). This lower-than-max TDP also gets you closer to the perf/watt sweet spot.

The M4 Pro scores slightly higher in Cinebench R24 despite being 10P+4E vs a full 16 P-cores on the 395, all while using something like 30% less power. The M4 Pro scores nearly 35% higher in the single-core R24 benchmark too. The 395's GPU performance is roughly comparable to the M4 Pro's in productivity software. More specifically, they trade blows based on which is more optimized in a particular app, but AMD GPUs have way more optimizations in general, and gaming should be much better with x86 + an AMD GPU vs Rosetta 2 + GPU translation layers + Wine/CrossOver.

The M4 Pro gets around 50% better battery life for tasks like web browsing when accounting for battery size differences, and more than double the battery life per watt-hour when doing something simple like playing a video. Battery life under full load is a bit better for the 395, but doing the math, this definitely involves the 395 throttling significantly down from its 70 W TDP.

I've got an AMD Ryzen 9 365 processor in my new laptop and I really like it. Huge battery life and good performance when needed; it's comparable to the M3 (not the Max).

I was just recently trying to buy a laptop and was looking at that chip, but like you said, it's not available in anything except the Framework Desktop and a weird tablet that's 2.5x as expensive as a MacBook. It's competitive on paper, but still completely infeasible at the moment.

There is only the HP ZBook Ultra G1a.

Some Chinese companies have also announced laptops with it coming out soon.

Also, you don't realize until you try them out that other issues make running models on the AMD chip ridiculously slow compared to running the same models on an M4. Some of that's software. But a lot of it is how the chip/memory/neural units etc. are organized.

Right now, AMD is not even in the ballpark.

In fact, the real kick in the 'nads was my fully kitted M4 laptop outperforming the AMD. I just gave up.

I'll keep checking in with AMD and Intel every generation though. It's gotta change at some point.

There are a few mini PCs using the 395+. Check out the Beelink GTR9 Pro (AMD Ryzen AI Max+ 395) and the GMKtec EVO-X2.

The Ryzen AI series just made it to their laptop lineup today. The Framework 16 has either the AMD Ryzen™ AI 9 HX 370 or the AI 7 350.

https://frame.work/laptop16?tab=whats-new

You can find that processor in the 14" HP ZBook Ultra G1a (which is also Ubuntu certified). There is also the Asus Z13, though I'm not certain it works well with Linux.

This is not even a remotely accurate characterization of the relative performance of the Ryzen AI Max+ 395 and the Apple M4. I have both an expensive implementation of the former and the $499 version of the latter, and my M4 Mac mini beats the Ryzen by 80% or more in many single-threaded workloads, like browser benchmarks.

I have the same experience here with my MacBook Air M1 from 2020 with 16GB RAM and 512GB SSD. After three years, I upgraded to a MacBook Pro with M3 Pro, 36GB of RAM, and 2TB of storage. I use this as my main machine with 2 displays attached via a TB4 dock.

I work in IT, and all the new machines for our company come across my desk for checking, and I have observed the exact same points as the OP.

The new machines are either fast and loud and hot and with poor battery life, or they are slow and "warm" and have moderate battery life.

But I have not yet had a business laptop - ARM, AMD, or Intel - which can even compete with the M1 Air, not to speak of the M3 Pro! Not to mention all the issues with crappy Lenovo docks, etc.

It doesn’t matter if I install Linux or Windows. The funny point is that some of my colleagues have ordered a MacBook Air or Pro and run their Windows or Linux in a virtual machine via Parallels.

Think about it: Windows 11 or Linux in a VM is faster, snappier, quieter, and has even longer battery life than the same systems running natively on a business machine from Lenovo, HP, or Dell.

Well, your mileage may vary, but IMHO there is no alternative to a Mac nowadays, even if you want to use Linux or Windows.

I'm still using my MacBook Air M1 with 8GB of RAM as my personal workhorse. It runs Docker Desktop and VS Code better than my T14-whatever Windows machine with 32GB of RAM. But that is Windows, and it has a bunch of enterprise stuff running. I assume it would work better with Linux, or even Windows without whatever our IT does to control it.

With GeForce Now I can even play games on it, though I wouldn't recommend it for any serious gamers.

I'm using GeForce Now on my M1 Air and it's wonderful. Yeah, I'll play competitive multiplayer on dedicated hardware (primarily Xbox Series X, because I refuse to own a Windows machine and I'm too lazy for Linux right now -- also, I'm hoping against hope for a real Steam console), but GeForce Now has been wonderful for other things: survival, crafting, MMOs, single-player RPGs, Cyberpunk, Battlefield, pretty much anything where you can deal with a few milliseconds of input latency. To be honest, what they're doing here is wizardry to my dumb brain. The additional latency, to me, just feels like the amount of latency you get from a controller on an Xbox. However, if you play something that requires very quick input (competitive FPS, for example) AND you're connected to game servers with anywhere from 5ms to 100ms+ latency (playing on EU servers, for example), that added latency just becomes too much. I'll say this though: I've played Warzone solo on GeForce Now, connected to a local server with no more than 5ms latency, and it felt pretty decent. Definitely playable, and I think I got 2nd or 1st in a few of those games, but as soon as it gets over like 15-20ms, you're cooked.

Ha. Same here. My personal MBA M1/8GB just chugs along with whatever I need it to do. I have a T480 32GB Linux machine at home that I love, but my M1 just does what I need it to do.

And at the shop we are doing technology refreshes for the whole dev team, upgrading them to M4s. I was asked if I wanted to upgrade my M1 Pro to an M4, and I said no - mainly because I don't want to have to move my tooling over to a new machine, but also I am not bottlenecked by anything on my current M1.

Man, it's absolutely trivial to migrate your configurations to a new machine.

> there is no alternative to a Mac nowadays

I need to point this out all the time these days it seems, but this opinion is only valid if all you use is a laptop and all you care about is single-core performance.

The computing world is far bigger than just laptops.

Big music / 3D design / video editing production suites etc. still benefit much more from workstation PCs with higher PCIe bandwidth, more lanes for multiple SSDs and GPUs, and high-end multicore processing performance which cannot be matched by Apple Silicon.

Doesn’t Apple have significant market share for pro music and video editing?

For studio movies, render farms are usually Linux but I think many workstation tasks are done on Apple machines. Or is that no longer true?

Music production is overwhelmingly Apple. It comes from the fact that Pro Tools was Mac-only until the late 2000s, and Logic Pro, Apple's DAW and alternative to Pro Tools, was also very popular and also Mac-only. That left Cubase for Windows and a few others like Ableton, and less popular DAWs like Reaper, FruityLoops, etc. Today there are a few more options for Windows, like Studio One, which is very good though.

Add to that the fact that most audio interfaces were FireWire and plug-and-play on Mac, and a real struggle on Windows. With Windows you also had to deal with ASIO, and once you picked your audio interface it had to be used for both inputs and outputs (still the case to this day), forcing you to combine interfaces with workarounds like ASIO4ALL if you wanted to use different interfaces, while macOS just lets you pick different interfaces for input and output.

Linux has had very interesting projects; unfortunately, music production relies on a lot of expensive audio plugins that often come as installers and are a pain in the butt to use through Proton/Wine, when it's possible at all. That means doing music production on Linux can mean not using plugins you paid for and not finding alternatives to them. It's a shame, because I'd love to be able to use only Linux.

> That left Cubase for Windows

When I was at music college doing production courses, they exclusively taught Cubase on Windows.

Yes, for a while that was the only "serious" option for Windows.

Yes, and Logic Pro was generally looked at as 'My first DAW' in most studios I have been in.

Also, Pro Tools was available on Windows from 1997 and was used in many PC-based studios.

I remember Logic Pro becoming quite popular after version 8; even though veterans who knew Pro Tools backwards had no reason to switch, a lot of the newer studios used Logic.

You're right about Pro Tools on Windows. I got confused about Pro Tools not requiring the use of their own interfaces.

Prosumer, but not pro. Pixar, for example, are not modelling and animating on Apple Silicon.

On the video side Vegas Pro is used in a lot of production houses, and it does not run on Apple Silicon at all.

> Well, your mileage may vary, but IMHO there is no alternative to a Mac nowadays, even if you want to use Linux or Windows.

I guess I'd slightly change that to "MacBook" or similar, as Apple is top-in-class when it comes to laptops, but for desktops they don't seem to even be in the fight anymore, unless reducing power consumption is your top concern. If you're aiming for "performance per money spent", there isn't really any alternative to non-Apple hardware.

I do agree they make the best hardware in terms of feel, though, which is important for laptops. But computing is so much larger than laptops, especially if you're always working in the same place every day (like me).

The Mac Studio is pretty good at everything except raw GPU speed, which, depending on your use case, may be completely irrelevant.

First, Apple did an excellent job optimizing their software stack for their hardware. This is something that few companies have the ability to do as they target a wide array of hardware. This is even more impressive given the scale of Apple's hardware. The same kernel runs on a Watch and a Mac Studio.

Second, the x86 platform has a lot of legacy, and each operation on x86 is translated from an x86 instruction into RISC-like micro-ops. This is an inherent penalty that Apple doesn't have to pay, and it is also why Rosetta 2 can achieve "near native" x86 performance; both platforms translate the x86 instructions.

Third, there are some architectural differences even if the instruction decoding steps are removed from the discussion. Apple Silicon has a huge out-of-order buffer, and it's 8-wide vs x86's 4-wide. From there, the actual logic is different, the design is different, and the packaging is different. AMD's Ryzen AI Max 300 series does get close to Apple by using many of the same techniques, like unified memory and tossing everything onto the package; where it does lose, it is due to all of the other differences.

In the end, if people want crazy efficiency Apple is a great answer and delivers solid performance. If people want the absolute highest performance, then something like Ryzen Threadripper, EPYC, or even the higher-end consumer AMD chips are great choices.

This seems mostly misinformed.

1) Apple Silicon outperforms all laptop CPUs in the same power envelope on 1T on industry-standard tests: it's not predominantly due to "optimizing their software stack". SPECint, SPECfp, Geekbench, Cinebench, etc. all show major improvements.

2) x86 also heavily relies on micro-ops to greatly improve performance. This is not a "penalty" in any sense.

3) x86 is now six-wide, eight-wide, or nine-wide (with asterisks) for decode width on all major Intel & AMD cores. The myth of x86 being stuck on four-wide has been long disproven.

4) Large buffers, L1, L2, L3, caches, etc. are not exclusive to any CPU microarchitecture. Anyone can increase them—the question is, how much does your core benefit from larger cache features?

5) Ryzen AI Max 300 (Strix Halo) gets nowhere near Apple on 1T perf / W and still loses on 1T perf. Strix Halo uses slower CPUs versus the beastly 9950X below:

Fanless iPad M4 P-core SPEC2017 int, fp, geomean: 10.61, 15.58, 12.85

AMD 9950X (Zen 5) SPEC2017 int, fp, geomean: 10.14, 15.18, 12.41

Intel 285K (Lion Cove) SPEC2017 int, fp, geomean: 9.81, 12.44, 11.05

Source: https://youtu.be/2jEdpCMD5E8?t=185, https://youtu.be/ymoiWv9BF7Q?t=670

The 9950X & 285K eat 20W+ per core for that 1T perf; the M4 uses ~7W. Apple has a node advantage, but no node on Earth gives you 50% less power.
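
Doing the division on those quoted numbers (treating the ~7 W and ~20 W figures as representative, which they only roughly are):

  // Rough perf-per-watt comparison from the SPEC2017 geomeans and the power
  // figures quoted above. Both power numbers are approximate.
  #include <iostream>

  int main() {
      const double m4_perf   = 12.85, m4_watts   = 7.0;  // fanless iPad M4 P-core
      const double zen5_perf = 12.41, zen5_watts = 20.0; // 9950X 1T, 20 W+

      const double m4_ppw   = m4_perf / m4_watts;     // ~1.84 points per watt
      const double zen5_ppw = zen5_perf / zen5_watts; // ~0.62 points per watt

      std::cout << "M4 1T perf/W advantage: ~" << m4_ppw / zen5_ppw << "x\n"; // ~3x
  }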

There is no contest.

>> x86 is now six-wide, eight-wide, or nine-wide (with asterisks) for decode width on all major Intel & AMD cores. The myth of x86 being stuck on four-wide has been long disproven.

From the AMD side it was 4-wide until Zen 5. And now it's still 4-wide, but there is a separate 4-wide decoder for each thread. The micro-op cache can deliver a lot of pre-decoded instructions, so the issue width is (I dunno) wider, but the decode width is still 4.

1. Apple’s optimizations are one point in their favor. XNU is good, and Apple’s memory management is excellent.

2. x86 micro-ops vs ARM decode are not equivalent. x86's variable-length instructions make the whole process far more complicated than it is on something like ARM. This is a penalty due to legacy design.

3. The OP was talking about the M1. AFAIK, the M4 is now 10-wide, and most x86 is 6-wide (Zen 5 does some weird stuff). x86 was 4-wide at the time of the M1's introduction.

4. The M1 has over 600 reorder buffer entries… significantly larger than competitors.

5. Close, relative to x86 competitors.

2. uops are a cope that costs. That uop cache and its cache controller use tons of power. ARM designs with 32-bit support had a uop cache, but they cut it when going to 64-bit-only designs (look at the ARM A715 vs A710), which dramatically reduced frontend size and power consumption.

3. The claim was never "stuck on 4-wide", but that going wider would incur significant penalties, which is the case. AMD uses two 4-wide decoders and pays a big penalty in complexity trying to keep them coherent and occupied. Intel went 6-wide for Golden Cove, which is infamous for being the largest and most power-hungry x86 design in a couple of decades. This seems to prove the 4-wide people right.

4. This is only partially true. The ISA impacts which designs make sense, which then impacts cache size. The uop cache can affect L1 I-cache size. Page size and cache line size also affect L1 cache sizes. Target clock speeds and cache latency also affect which cache sizes are viable.

> 2) x86 also heavily relies on micro-ops to greatly improve performance. This is not a "penalty" in any sense.

It's an energy penalty, even if wall clock time improves.

A whole lot of bluster in this thread, but finally someone who's actually doing their research chimes in. Thank you for giving me a place to start in understanding why this is such a deep mystery!

Apple CPUs do decode instructions into micro-ops.

https://dougallj.github.io/applecpu/firestorm.html

> Second, the x86 platform has a lot of legacy, and each operation on x86 is translated from an x86 instruction into RISC-like micro-ops. This is an inherent penalty that Apple doesn't have to pay, and it is also why Rosetta 2 can achieve "near native" x86 performance; both platforms translate the x86 instructions.

Can we please stop with this myth? Every superscalar processor is doing the exact same thing, converting the ISA into the µops (which may involve fission or fusion) that are actually serviced by the execution units. It doesn't matter if the ISA is x86 or ARM or RISC-V--it's a feature of the superscalar architecture, not the ISA itself.

The only reason that this canard keeps coming out is because the RISC advocates thought that superscalar was impossible to implement for a CISC architecture and x86 proved them wrong, and so instead they pretend that it's only because x86 somehow cheats and converts itself to RISC internally.

> they pretend that it's only because x86 somehow cheats and converts itself to RISC internally.

Which hasn't even been the case anymore for several years now. Some µOPs in modern x86-64 cores combine memory access with arithmetic operations, making them decidedly non-RISC.

[deleted]

There’s a number of reasons, all of which in concert create the appearance of a performance gap between the two:

* Apple has had decades optimizing its software and hardware stacks to the demands of its majority users, whereas Intel and AMD have to optimize for a much broader scope of use cases.

* Apple was willing to throw out legacy support on a regular basis. Intel and AMD, by comparison, are still expected to run code written for DOS or specific extensions in major Enterprises, which adds to complexity and cost

* The “standard” of x86 (and demand for newly-bolted-on extensions) means effort into optimizations for efficiency or performance meet diminishing returns fairly quickly. The maturity of the platform also means the “easy” gains are long gone/already done, and so it’s a matter of edge cases and smaller tweaks rather than comprehensive redesigns.

* Software in x86 world is not optimized, broadly, because it doesn’t have to be. The demoscene shows what can be achieved in tight performance envelopes, but software companies have never had reason to optimize code or performance when next year has always promised more cores or more GHz.

It boils down to comparing two different products and asking why they can’t be the same. Apple’s hardware is purpose-built for its userbase, operating systems, and software; x86 is not, and never has been. Those of us who remember the 80s and 90s of SPARC/POWER/Itanium/etc recall that specialty designs often performed better than generalist ones in their specialties, but lacked compatibility as a result.

The Apple ARM vs Intel/AMD x86 is the same thing.

Intel chose and stuck with backcompat as a strategy. They could, tomorrow, split their designs into legacy hardware and modern hardware. They didn’t, but Apple has done breaking generational change many times.

Apple also has a particular advantage in owning the OS and having the ability to force independent developers to upgrade their software, which makes incompatible updates (including perf optimizations) possible.

[deleted]

Intel also wanted to break backcompat and start fresh with Itanium but it failed.

Fair enough, but Apple Silicon is not a specialist chip in the way a SPARC chip was. It's a general purpose SoC & SiP stack. There is nothing stopping Intel being able to invest in SoC & SiP and being able to maintain backward compatibility while providing much better power/performance for a mobile (including laptop and tablet) product strategy.

They could also just sit down with Microsoft and say "Right, we're going to go in an entirely different direction, and provide you with something absolutely mind-blowing, but we're going to have to do software emulation for backward compatibility and that will suck for a while until things get recompiled, or it'll suck forever if they never do".

Apple did this twice in the last 20 years - once on the move from PowerPC chips to Intel, and again from Intel to Apple Silicon.

If Microsoft and enough large OEMs (Dell, etc.) thought there was enough juice in the new proposed architecture to cause a major redevelopment of everything from mobile to data centre level compute, they'd line right up, because they know that if you can significantly reduce the amount of power consumption while smashing benchmarks, there are going to be long, long wait times for that hardware and software, and it's payday for everyone.

We now know so much more about processor design, instruction set and compiler design than we did when the x86 was shaping up, it seems obvious to me that:

1. RISC is a proven entity worth investing in

2. SoC & SiP is a proven entity worth investing in

3. Customers love better power/performance curves at every level from the device in their pocket to the racks in data centres

4. Intel is in real trouble if they are seriously considering the US government owning actual equity, albeit proposed as non-voting, non-controlling

Intel can keep the x86 line around if they want, but their R&D needs to be chasing where the market is heading - and fast - while bringing the rest of the chain along with them.

> Right, we're going to go in an entirely different direction, and provide you with something absolutely mind-blowing, but we're going to have to do software emulation for backward compatibility and that will suck for a while until things get recompiled, or it'll suck forever if they never do

For an example of why this doesn't work, see 'Intel Itanium'.

That's because the direction they took was awful. That does not mean other directions do not exist right now that they could raise money for and invest in.

The alternative is death - they do nothing, they're going to die.

Which option do you think they should take?

> The alternative is death - they do nothing, they're going to die.

That's a subjective opinion. Plenty of people still value higher-power multi-core chips over Apple silicon, because they are still better at doing real work. I don't think they need to go in a new direction personally, but I was just showing an example of why your proposed solution is not a silver bullet.

It’s a bit unfair to say Apple threw out backwards compatibility.

Each time they had a pretty good emulation story to keep most stuff (certainly popular stuff) working through a multi-year transition period.

IMO, this is better than carrying around 40 years of cruft.

This was absolutely not the case for 32-bit iOS apps, which they dropped from one year to the next like a hot potato. I still mourn the loss of some of the apps.

Apple purposely make it so after 3 new versions of the OS you cannot upgrade the OS on the hardware any further. This in turn means you cannot install new software as the applications themselves require the newer versions of the OS. It has been this way on apple hardware for decades, and has laid the foundation of not ever needing to provide backwards compatibility for more than a few years as well as forcing new hardware purchases. The 'emulation story' only needs to work for a couple of generations, then it itself can be sunsetted and is not expected to be backwards compatible with newer OSes. It is also the reason it is pretty much impossible to upgrade CPUs in Apple machines.

> IMO, this is better than carrying around 40 years of cruft.

Backwards compatibility is such a strong point, it is why Windows survives even though it has become a bloated, ad-riddled mess. You can argue which is better, but that seriously depends on your requirements. If you have a business application coded 30 years ago on x86 that no developer in your company understands any more, then backwards compatibility is king. On the other end of the spectrum, if you are happy to be purchasing new software subscriptions constantly and having bleeding edge hardware is a must for you, then backwards compatibility probably isn't required.

> Apple purposely make it so after 3 new versions of the OS you cannot upgrade the OS on the hardware any further.

A new major version of macOS comes out every year. The oldest Mac still supported by the upcoming macOS 26 is from 2019.

Wow, 6 years!

> Apple purposely make it so after 3 new versions of the OS you cannot upgrade the OS on the hardware any further

"oh a post about Apple, let me come in and share my hatred for Apple again by outright lying!"

As stated already, macOS 26 runs on the M1 and even the 2019 MacBook Pro. So I think I know where you got the "3 new versions" figure, and it's a dark and smelly place.

Apologies, I was under the impression that the major OS release was every 2 years, and so I equated 6 years into 3 releases. No need to be quite so rude when you could just factually correct me.

However, my parents' 2017 MacBook Pro can only upgrade to Ventura, which is a 2022 release. 5 years and that $2.5k baby was obsolete. However rude you are in your defense of Apple, 5-6 years until software starts being unable to install is pretty shitty. I use 30 year old apps daily on Windows with no issue.

Looks like defending Apple is the smelly place to be judging by your tone and condescending snark.

>Apple purposely make it so after 3 new versions of the OS you cannot upgrade the OS on the hardware any further.

This is false.

Apologies, I meant 5-6 years, with a release every 2. Turns out it's every year, so I was wrong.

> Apple has had decades optimizing its software and hardware stacks to the demands of its majority users, whereas Intel and AMD have to optimize for a much broader scope of use cases.

But as you mention - they've at multiple times changed the underlying architecture, which surely would render a large part of prior optimizations obsolete?

> Software in x86 world is not optimized, broadly, because it doesn’t have to be.

Does ARM software need optimization more than x86?

That sure sounds more like the reality of a performance gap than the appearance of one.

The broader audience/apples to oranges bit is fair. We're not choosing apple hardware for server. x64 is still dominant on the server with some cheap custom arm chips as an option, no?

[deleted]

Sure, but that’s very different than the context of the original question.

[deleted]

I don't think backcompat is that big of a deal, since old DOS programs don't take much compute power to run anyway, and Apple has shown layers like Rosetta work fine.

I generally agree but what's Qualcomm's excuse?

> Software in x86 world is not optimized, broadly, because it doesn’t have to be. The demoscene shows what can be achieved in tight performance envelopes, but software companies have never had reason to optimize code or performance when next year has always promised more cores or more GHz.

This is why I get so livid regarding Electron apps on the Mac.

I’m never surprised by developer-centric apps like Docker Desktop — those inclined to work on highly technical apps tend not to care much about UX — but to see billion-dollar teams like Slack and 1Password indulge in this slop is so disheartening.

> might be my Linux setup being inefficient

Given that videos spin up the fans, there is very likely a problem with your GPU setup on Linux, and I'd expect an improvement if you managed to fix it.
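
If you want to check, vainfo (from libva-utils) shows whether the driver exposes hardware decode at all, and chrome://gpu shows whether Chrome is actually using it. The Chrome feature-flag names for VA-API have changed across versions, so treat the flag below as an example rather than gospel:

    # does the driver advertise hardware decode for common codecs?
    vainfo | grep -i -e h264 -e hevc -e vp9 -e av1

    # then look at chrome://gpu for "Video Decode: Hardware accelerated";
    # on some versions you still have to opt in, e.g.:
    google-chrome --enable-features=VaapiVideoDecodeLinuxGL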

Another thing is that Chrome on Linux tends to consume an exorbitant amount of power with all the background processes, inefficient rendering and disk IO, so updating it to one of the latest versions and enabling "memory saving" might help a lot.

Switching to another scheduler, reducing the interrupt rate, etc. will probably help too.

Linux on my current laptop reduced battery time x12 compared to Windows, and a bunch of optimizations like that managed to improve the situation to something like x6, i.e. it's still very bad.

> Is x86 just not able to keep up with the ARM architecture?

Yes and no. x86 is inherently inefficient, and most of the progress over the last two decades was about offloading computations to some more advanced and efficient coprocessors. That's how we got GPUs, DMA on M.2 and Ethernet controllers.

That said, it's unlikely that x86 specifically is what wastes your battery. I would rather blame Linux, suspect its CPU frequency/power drivers are misbehaving on some CPUs, and unfortunately have no idea how to fix it.

> x86 is inherently inefficient

Nothing in x86 forces an implementation to be less efficient than what you could do with ARM instead.

x86 and ARM have historically served very different markets. I think the pattern of efficiency differences of past implementations is better explained by market forces rather than ISA specifics.

x12 and x6 do not seem plausible. Something is very wrong.

These figures are very plausible. Most Linux distros are terribly inefficient by default.

Linux can actually meet or even exceed Windows' power efficiency, at least at some tasks, but it takes a lot of work to get there. I'd start with powertop and TLP.

As usual, the Arch wiki is a good place to find more information: https://wiki.archlinux.org/title/Power_management
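
A first pass, before writing anything permanent, can be as simple as this; powertop's "Tunables" tab is a good hint at what TLP or udev rules should end up setting:

    sudo powertop               # interactive view; check the "Tunables" tab
    sudo powertop --auto-tune   # apply all suggestions for this boot (not persistent)
    sudo tlp start              # if TLP is installed, apply its profile now
    sudo tlp-stat -s -b         # summary of active settings plus battery info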

Those numbers would imply <1h runtime, or a >50W consumption at idle (for typical battery capacities). That's insane.

I've used Linux laptops since ~2007, and am well aware of the issues. 12x is well beyond normal.

At least on Thinkpads over the years, I've never seen anything remotely close to that either. I've had my Thinkpad x260 power draw down to 2.5 watts at idle, and around 4 or 5 watts with a browser and a few terminals open. That was back in 2018! With the hot-swappable battery on the back, I could go for 24 hours of active use without concern.

I get below 5W at idle (ff and emacs open, screen at indoor brightness, wifi on) on my gen11 framework. Going from 8 to 5 required some tinkering.

I don't think I ever saw 50W at all, even under load; they probably run an Ultra U1xxH, permanently turbo-boosted.

For some reason. Given the level of tinkering (with schedulers and interrupt frequencies), it's likely self-imposed at this point, but you never know.

My CPU is at over 5GHz, 1% load and 70C at the moment. That's in a "power-saving mode".

If nothing would be wrong, it'd be at something like 1.5GHz with most of the cores unpowered.

Something is wrong with the power governor then. I have the opposite experience: I was able to tune Linux on a Core Ultra 155H laptop so it lasts longer than the Windows one. Needed to use kernel 6.11+ and TLP [0] with pretty aggressive energy saving settings. Also played a bit with Intel LPMD [1] but did not notice much improvement.

[0] https://github.com/linrunner/TLP

[1] https://github.com/intel/intel-lpmd
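
For reference, the aggressive-ish battery-side settings live in /etc/tlp.conf; the key names below are from the TLP documentation, and the values are just a reasonable starting point rather than a recommendation:

    # /etc/tlp.conf (excerpt)
    CPU_SCALING_GOVERNOR_ON_BAT=powersave
    CPU_ENERGY_PERF_POLICY_ON_BAT=power
    PLATFORM_PROFILE_ON_BAT=low-power
    CPU_BOOST_ON_BAT=0
    RUNTIME_PM_ON_BAT=auto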

What is the laptop, and what's it doing?

What p-state driver are you using?
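
One quick way to check, assuming the standard cpufreq sysfs layout (exact names vary by kernel and driver):

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver     # e.g. amd-pstate-epp, intel_pstate, acpi-cpufreq
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # e.g. powersave, performance, schedutil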

They're big, expensive chips with a focus on power efficiency. AMD and Intel's chips that are on the big and expensive side tend toward being optimized for higher power ranges, so they don't compete well on efficiency, while their more power efficient chips tend toward being optimized for size/cost.

If you're willing to spend a bunch of die area (which directly translates into cost) you can get good numbers on the other two legs of the Power-Performance-Area triangle. The issue is that the market position of Apple's competitors is such that it doesn't make as much sense for them to make such big and expensive chips (particularly CPU cores) in a mobile-friendly power envelope.

Per core, Apple’s Performance cores are no bigger than AMD’s Zen cores. So it’s a myth that they’re only fast and efficient because they are big.

What makes Apple silicon chips big is they bolt on a fast GPU on it. If you include the die of a discrete GPU with an x86 chip, it’d be the same or bigger than M series.

You can look at Intel’s Lunar Lake as an example where it’s physically bigger than an M4 but slower in CPU, GPU, NPU and has way worse efficiency.

Another comparison is AMD Strix Halo. Despite being ~1.5x bigger than the M4 Pro, it has worse efficiency, ST performance, and GPU performance. It does have slightly more MT.

Is it not true that the instruction decoder is always active on x86, and is quite complex?

Such a decoder is vastly less sophisticated with AArch64.

That is one obvious architectural drawback for power efficiency: a legacy instruction set with variable word length, two FPUs (x87 and SSE), 16-bit compatibility with segmented memory, and hundreds of otherwise unused opcodes.

How much legacy must Apple implement? Non-kernel AArch32 and Thumb2?

Edit: think about it... R4000 was the first 64-bit MIPS in 1991. AMD64 was introduced in 2000.

AArch64 emerged in 2011, and in taking their time, the designers avoided the mistakes made by others.

There's no AArch32 or Thumb support (A32/T32) on M-series chips. AArch64 (technically A64) is the only supported instruction set. Fun fact: this makes it impossible to run Mario Kart 8 via virtualization on Macs without software translation, since it's A32.

How much that does for efficiency I can't say, but I imagine it helps, especially given just how damn easy it is to decode.

It actually doesn't make much difference: https://chipsandcheese.com/i/138977378/decoder-differences-a...

I had not realized that Apple did not implement any of the 32-bit ARM environment, but that cuts the legs out from under this argument in the article:

"In Anandtech’s interview, Jim Keller noted that both x86 and ARM both added features over time as software demands evolved. Both got cleaned up a bit when they went 64-bit, but remain old instruction sets that have seen years of iteration."

I still say that x86 must run two FPUs all the time, and that has to cost some power (AMD must run three - it also has 3dNow).

Intel really couldn't resist adding instructions with each new chip (MMX, PAE for 32-bit, many more on this shorthand list that I don't know), which are now mostly baggage.

> I still say that x86 must run two FPUs all the time, and that has to cost some power (AMD must run three - it also has 3dNow).

Legacy floating-point and SIMD instructions exposed by the ISA (and extensions to it) don't have any bearing on how the hardware works internally.

Additionally, AMD processors haven't supported 3DNow! in over a decade -- K10 was the last processor family to support it.

Oh wow, I need to dig way deeper into this but wonderful resource - thanks!

> Despite being ~1.5x bigger than the M4 Pro

Where are you getting M4 die sizes from?

It would hardly be surprising, given the Max+ 395 has more, and on average better, cores, fabbed on 5nm versus the M4's 3nm. Die size is mostly GPU though.

Looking at some benchmarks:

> slightly more MT.

AMD's multicore passmark score is more than 40% higher.

https://www.cpubenchmark.net/compare/6345vs6403/Apple-M4-Pro...

> worse efficiency

The AMD is an older fab process and does not have P/E cores. What are you measuring?

> worse ST performance

The P/E design choice gives different trade-offs e.g. AMD has much higher average single core perf.

> worse GPU performance

The AMD GPU:

14.8 TFLOPS vs. M4 Pro 9.2 TFLOPS.

19% higher 3D Mark

34% higher GeekBench 6 OpenCL

Although a much crappier Blender score. I wonder what that's about.

https://nanoreview.net/en/gpu-compare/radeon-8060s-vs-apple-...

  Where are you getting M4 die sizes from?
M1 Pro is ~250mm2. M4 Pro likely increased in size a bit. So I estimated 300mm2. There are no official measurements but should be directionally correct.

  AMD's multicore passmark score is more than 40% higher.
It's an out of date benchmark that not even AMD endorses and the industry does not use. Meanwhile, AMD officially endorses Cinebench 2024 and Geekbench. Let's use those.

   The AMD is an older fab process and does not have P/E cores. What are you measuring?
Efficiency. Fab process does not account for the 3.65x efficiency deficit. N4 to N3 is roughly ~20-25% more efficient at the same speed.

  The P/E design choice gives different trade-offs e.g. AMD has much higher average single core perf.
Citation needed. Furthermore, macOS uses P cores for all the important tasks and E cores for background tasks. I fail to see why a higher average ST for AMD would translate to a better experience for users.

  14.8 TFLOPS vs. M4 Pro 9.2 TFLOPS.
TFLOPs are not the same between architectures.

  19% higher 3D Mark
Equal in 3DMark Wildlife, loses vs M4 Pro in Blender.

  34% higher GeekBench 6 OpenCL
OpenCL has long been deprecated on macOS. 105727 is the score for Metal, which is supported by macOS. 15% faster for M4 Pro.

The GPUs themselves are roughly equal. However, Strix Halo is still a bigger SoC.

> TFLOPs are not the same between architectures.

Shouldn't they be the same if we are speaking about the same precision? For example, [0] shows M4 Max 17 TFLOPS FP32 vs MAX+ 395 29.7 TFLOPS FP32 - not sure what exact operation was measured but at least it should be the same operation. Hard to make definitive statements without access to both machines.

[0] https://www.cpu-monkey.com/en/compare_cpu-apple_m4_max_16_cp...

Apple doesn't even disclose TFLOPS for the M4 Max, so no clue where that website got the numbers from.

TFLOPS can't be compared directly between architectures or generations. For example, Nvidia often quotes sparsity TFLOPS, which doubles the dense TFLOPS previously reported. I think AMD probably does the same for consumer GPUs.

Another example is Radeon RX Vega 64 which had 12.7 TFLOPS FP32. Yet, Radeon RX 5700 XT with just 9.8 TFLOPS FP32 absolutely destroyed it in gaming.

What a waste of time.

"directionally correct"... so you don't know and made up some numbers? Great.

AMD doesn't "endorse benchmarks" especially not fucking Geekbench for multi-core. No-one could because it's famously nonsense for higher core counts. AMD's decade old beef with Sysmark was about pro-Intel bias.

  "directionally correct"... so you don't know and made up some numbers? Great.
I never said it was exactly that size. Apple keeps the sizes of their base, Pro, and Max chips fairly consistent over generations.

Welcome to the world of chip discussions. I've never taken apart an M4 Pro computer and measured the die myself. It appears no one on the internet has. However, we can infer a lot of it based on previously known facts. In this case, we know M1 Pro's die size is around 250mm2.

  AMD doesn't "endorse benchmarks" especially not fucking Geekbench for multi-core. No-one could because it's famously nonsense for higher core counts. AMD's decade old beef with Sysmark was about pro-Intel bias.
Geekbench is the main benchmark AMD tends to use: https://videocardz.com/newz/amd-ryzen-5-7600x-has-already-be...

The reason is because Geekbench correlates highly with SPEC, which is the industry standard.

Their "main benchmark"? Stop making things up. It's no more than tragic fanboy addled fraud at this point.

That three-year old press-release refers to SINGLE CORE Geekbench and not the defective multicore version that doesn't scale with core counts. Given AMD's main USP is core counts it would be an... unusual choice.

AMD marketing uses every other benchmark under the sun too (no doubt whatever gives the better looking numbers)... including Passmark, e.g. it's on this Strix Halo page:

https://www.amd.com/en/products/processors/ai-pc-portfolio-l...

So I guess that means Passmark is "endorsed" by AMD too eh? Neat.

The industry has moved past Passmark because it does not correlate to actual real world performance.

The standard is SPEC, which correlates with Geekbench.

https://medium.com/silicon-reimagined/performance-delivered-...

Every time there is a discussion on Apple Silicon, some uninformed person always brings up Passmark, which is completely outdated.

Enough. You don't know what you are talking about.

What's with posting 5 year old medium articles about a different version of Geekbench? Geekbench 5 had different multicore scaling so if you want to argue that version was so great then you are also arguing against Geekbench 6 because they don't even match.

https://www.servethehome.com/a-reminder-that-geekbench-6-is-...

"AMD Ryzen Threadripper 3995WX, a huge 64 core/ 128 thread part, was performing at only 3-4x the rate of an Intel D-1718T quad-core part, even despite the fact it had 16x the core count and lots of other features."

"With the transition from Geekbench 5 to Geekbench 6, the focus of the Primate Labs team shifted to smaller CPUs"

GB6 measures MT the way most consumer applications use MT. GB5 was embarrassingly parallel. It reflects real world usage more.

Your source is an article based on someone finding a Geekbench result for a just-released CPU, and you somehow try to say it's from AMD itself and an endorsed benchmark, huh.

Those are AMD's marketing slides.

[flagged]

I’ve been thinking a lot about getting something from Framework, as I like their ethos around repairability. However, I currently have an M1 Pro which works just fine, so I’ve been kicking the can down the road while worrying that it just won’t be up to par in terms of what I’m used to from Apple. Not just the processor, but everything. Even in the Intel Mac days, I ended up buying an Asus Zephyrus G14, which had nothing but glowing reviews from everyone. I hated it and sold it within 6 months. There is a level of polish that I haven’t seen on any x86 laptop, which makes it really hard for me to venture outside of Apple’s sandbox.

I recently upgraded from an M1 MacBook Pro 15", which I was pretty happy with, to the M4 Max Pro 16". I've been extremely impressed with the new laptop. The key metric I use to judge performance is build speed for our main project. It's a thing I do a few dozen times per day. The M1 took about four minutes to run our integration tests. I should add that those tests run in parallel and make heavy use of Docker. There are close to 300 integration tests and a few unit tests. Each of those hit the database, Redis, and Elasticsearch. The M4 Pro dropped that to 40 seconds. Each individual test might take a few seconds. It seems to be benefiting a lot from both the faster CPU with lots of cores and the increased amount of memory and memory bandwidth. Whatever it is, I'm seriously impressed with this machine. It costs a lot new but on a three year lease, it boils down to about 100 euros per month. Totally worth it for me. And I'm kind of kicking myself for not upgrading earlier.

Before the M1, I was stuck using an intel core i5 running arch linux. My intel mac managed to die months before the M1 came out. Let's just say that the M1 really made me appreciate how stupidly slow that intel hardware is. I was losing lots of time doing builds. The laptop would be unusable during those builds.

Life is too short for crappy hardware. From a software point of view, I could live with Linux but not with Windows. But the hardware is a show stopper currently. I need something that runs cool and yet does not compromise on performance. And all the rest (non-crappy trackpad, amazingly good screen, cool to the touch, good battery life, etc.). And manages to look good too. I'm not aware of any windows/linux laptop that does not heavily compromise on at least a few of those things. I'm pretty sure I can get a fast laptop. But it'd be hot and loud and have the unusable synaptics trackpad. And a mediocre screen. Etc. In short, I'd be missing my mac.

Apple is showing some confidence by just designing a laptop that isn't even close to being cheap. This thing was well over 4K euros. Worth every penny. There aren't a lot of intel/amd laptops in that price class. Too much penny pinching happening in that world. People think nothing of buying a really expensive car to commute to work. But they'll cut on the thing that they use the whole day when they get there. That makes no sense whatsoever in my view.

The M4 was the first chip that tempted me to upgrade from the M1, which I think is the case for most people. At work, I’m at the mercy of the corporate lease. My personal Mac doesn’t get used in a way where I’ll see a major change, so I’m giving it a while longer.

I’ve actually been debating moving from the Pro to the Air. The M4 is about on par with the M1 Pro for a lot of things. But it’s not that much smaller, so I’d be getting a lateral performance move and losing ports, so I’m going to wait and see what the future holds.

Considering the amount of engineering that goes into Apple's laptops, and compared to other professional tools, 4000 EUR is extremely cheap. Other tradespeople have to spend 10x more.

I'm in the same boat. Still running an MBP M1 Pro 14". Luckily I bought with 32GB in 2021 when it came out so it can run all things docker similar to your setup. I recently ran a production like workload, real stress test, it was the first time I had the fan spinning constantly but it was still responsive and a pleasure to use (and sit next to!) for a few hours.

I've been window shopping for a couple of months now, have test run Linux and really liking the experience there (played on older Intel hardware). I am completely de-appled software-wise, with the 1 exception of iMessages because of my kids using ipads. But that's really about it. So, I'm ready to jump.

But so far, all my research hasn't led to anything I'm convinced I wouldn't regret in the end. A desktop Ryzen 7700 or 9600X would probably suffice, but it would mean I need to constantly switch machines and I'm not sure if I'm ready for that. All mobile non-Macs have significant downsides and you typically can't even try before you buy anywhere. So you'd be relying on reviews. But everybody has a different tolerance for things like trackpad haptics, thermals, noise, screen quality etc. So, those reviews don't give enough confidence. I've had 13 Apple years so far. The first 5 were pleasant, the next 3 really sucked, but since Apple silicon I feel I have totally forgotten all the suffering in the non-Apple world and with those noisy, slow Intel Macs.

I think it has to boil down to serious reasons why the Apple hardware is not fit for one's purpose. Be it better gaming, an extreme amount of storage, an insane amount of RAM, all while ignoring the value of "the perfect package" and its low power draw, low noise etc. Something that does not make one regret the change. DHH has done it and so have others, but he switched to the Framework Desktop AI Max. So it came with a change in lifestyle. And he also does gaming, that's another good reason (to switch to Linux or dual boot (as he mentioned Fortnite)).

I don't have such reasons currently. Unless we see hardware that is at least as fast and enjoyable like the M1 Pro or higher. I tried Asahi but it's quite cumbersome with the dual boot and also DP Alt not there yet and maybe never will, so I gave up on that.

So, I'll wait another year and will see then. I hope I don't get my company to buy me an M4 Max Ultra or so as that will ruin my desire to switch for 10 more years I guess.

> There is a level of polish

Yeah, those glossy mirror-like displays in which you see yourself much better than the displayed content are polished really well

Having used both types extensively my dell matte display diffuses the reflections so badly that you can’t see a damn thing. The one that replaced it was even worse.

I’ll take the apple display any day. It’s bright enough to blast through any reflections.

> "There is a level of polish that I haven’t seen on any x86 laptop, which makes it really hard for me to venture outside of Apple’s sandbox."

Hah, it's exactly the other way around for me; I can't stand Apple's hardware. But then again I never bought anything Asus... let alone gamer laptops.

What exactly is wrong with Apple hardware?

For me, the keyboards in the UK have an awful layout.

Not sure why they can follow ANSI in the US but not ISO here. I just have to override the layout and ignore the symbols.

I very much prefer penabled detachables, a much better form factor than the outdated classic laptop, with a focus on general-purpose computing, such as HP's ZBook x2 G4 detachable workstation. The ideal machine would be a second iteration of that design, just updated to be smaller as well as more performant and repairable. Of course that's not gonna happen, as there's, apart from legal issues, no money in it.

Apple on the other hand doesn't offer such machines... actually never has. To me, prizing maintainability, expandability, modularity, etc., their laptops are completely undesirable even within the confines of their outdated form factor; their efficient performance is largely irrelevant, and their tablets are much too enshittified to warrant consideration. And that's before we get into the OS and eco-system aspects. :)

Most manufacturers just don't give a shit. Had the exact same experience with a well-reviewed Acer laptop a while back, ended up getting rid of it a few months in because of constant annoyances, replaced with a MacBook Air that lasted for many years. A few years back, I got one of the popular Asus NUCs that came without networking drivers installed. I'm guessing those were on the CD that came with it, but not particularly helpful on a PC without a CD drive. The same SKU came with a variety of networking hardware from different manufacturers, without any indication of which combination I had, so trial and error it was. Zero chance non-techy people would get either working on their own.

My venture outside of MacBooks included a Dell XPS. Supposed to be their high end, and that year's model was well reviewed by multiple sources.... yet I returned it after like a week. The fan would not only run far too often but the sound it made was also atrocious. I have no clue if mine was defective or if all the reviewers are deaf to high frequencies. And the body was so flimsy that I would grab the corner of the laptop to move it and end up triggering a mouse click.

I had a 2020 Zephyrus G14 - also bought it largely because of the reviews.

First two years it was solid, but then weird stuff started happening like the integrated GPU running full throttle at all times and sleep mode meaning "high temperature and fans spinning to do exactly nothing" (that seems to be a Windows problem because my work machine does the same).

Meanwhile the manufacturer, having released a new model, lost interest, so no firmware updates to address those issues.

I currently have the Framework 16 and I'm happy with it, but I wouldn't recommend it by default.

I for one bought it because I tend to damage stuff like screens and ports and it also enables me to have unusual arrangements like a left-handed numpad - not exactly mainstream requirements.

I suspect the majority of people who recommend particular x86 laptops have only had x86 laptops. There’s a lot of disparity in quality between brands and models.

Apple is just off the side somewhere else.

I don't think there is a single thing you can point to. But overall Apple's hardware/software is highly optimized, closely knit, and each component is in general the best the industry has to offer. It is sold cheap as they make money on volume and an optimized supply chain.

Framework does not have the volume, it is optimized for modularity, and the software is not as optimized for the hardware.

As a general purpose computer Apple is impossible to beat and it will take a paradigm shift for that to change (a completely new platform - similar to the introduction of the smart phone). Framework has its place as a specialized device for people who enjoy flexible hardware and custom operating systems.

> It is sold cheap as they make money on volume and an optimized supply chain.

What about all the money that they make from abusive practices like refusing to integrate with competitors' products thus forcing you to buy their ecosystem, phoning home to run any app, high app store fees even on Mac OS, and their massive anti repair shenanigans?

Macs today are not designed to be easily repairable but instead to be lighter and otherwise better integrated - I believe that is a consequence of consumer preferences and not shady business practices.

As for the services - it is a bit off topic as I believe Apple makes a profit on their macs alone ignoring their services business. But in general I have less of a problem with a subscription / fee-driven services business compared to an advertisement-based one. And as for the fee / alternative payment controversy (epic vs apple etc.) this is something that is relevant if you are a big brand that can actually market on your own / build an alternative shop infrastructure. For small time developers the marketing and payment infrastructure the apple app store offers is a bargain.

Macbooks are one of the heaviest laptops you can buy. I think they are doing it for the premium feel - it is extremely sturdy. I recently got some random Lenovo Yoga for Linux to go alongside my MacBook and it weighs less, is as thin and even has a dedicated GPU - while having 2 user-replaceable M.2 slots. It is also very sturdy, but not as sturdy as MacBooks.

What I am saying is that Apple could for sure fit replaceable drives without any hit to size or weight. But their Mac strategy is to price based on disk size and make repairs expensive so you buy a new machine. I don't complain; it is the reason why the cheapest MacBook Air is the best laptop deal.

But let's stop with the marketing story that it's their engineering genius and not their market strategy.

  Macbooks are one of the heaviest laptops you can buy. I think they are doing it for the premium feel - it is extremely sturdy.
Yes, because of the metal enclosure while nearly all Windows laptop makers use plastic. Macs are usually the thinnest laptops in their class though.

My Asus is all metal, thinner, and lighter than my same-screen-size MacBook

It's also not as robust. But it's definitely thinner and lighter.

>Macbooks are one of the heaviest laptops you can buy.

I don't think this is even close to true. My last laptop from 2020 weighed ~2.6kg and its 2025 counterpart is still 2.1kg, while my work M1 Mac is 1.3kg.

> I think they are doing it for the premium feel - it is extremely sturdy

It's not merely a feel; I've dropped it onto the pavement more than once from ~1.5 meters and it has continued working well, whereas none of my previous laptops got away scot-free from even one drop.

Apple does make repairs very hard, and I agree repairability should be made much more accessible.

I am pretty sure it is a consequence of consumer preference. I can see it in my own behaviour - I am a power user of all things computing and it has been decades since I upgraded a hard disk.

When one controls the OS and much of the delivery chain, it is not unthinkable to decide to throw some billions of $$$ at creating a chip optimized to serve exactly your needs.

So this is precisely what Apple did, and we can argue it was long time in the making. The funny part is that nobody expected x86 to make way for ARM chips, but perhaps this was a corporate bias stemming from Intel marketing, which they are arguably very good at.

> As a general purpose computer Apple is impossible to beat

Only if all you care about is having a laptop with really fast single core performance. Anything that requires real grunt needs a workstation or server, which Apple silicon cannot provide.

Plenty of excellent comments about the companies - e.g. Apple vertical closed mobile 1st, while Microsoft horizontal open desktop 1st; decades of work by many thousands of people went into optimising many tiny advantages, aka tricks - but can't help but think back of pre-history. Where Intel was always more-is-more, while ARM was always less-is-more. Intel was winning for the longest time. Never expected to see non-x86 competitive single core integer performance tbh. And in the pre-pre-history, one generation further back, tiny 6502 1MHz and mostly totally 8 bit only, could about keep up with Z80 4MHz and his almost-aspiring-to 16 bit registers. Always made me wonder somewhat - "whut, how come??"

That's a Chrome problem, especially on extra powerful processors like Strix Halo. Apple is very strict about power consumption in the development of Safari, but Chrome is designed to make use of all unallocated resources. This works great on a desktop computer, making it faster than Safari, but the difference isn't that significant and it results in a lot of power draw on mobile platforms. Many simple web sites will peg a CPU core even when not in focus, and it really adds up with multiple tabs open.

It's made worse on the Strix Halo platform, because it's a performance first design, so there's more resource for Chrome to take advantage of.

The closest browser to Safari that works on Linux is Falkon. Its compatibility is even lower than Safari's, so there are a lot of sites where you can't use it, but on the ones where you can, your battery usage can be an order of magnitude less.

I recommend using Thorium instead of Chrome; it's better but it's still Chromium under the hood, so it doesn't save much power. I use it on pages that refuse to work on anything other than Chromium.

Chrome doesn't let you suspend tabs, and as far as I could find there aren't any plugins to do so; it just kills the process when there aren't enough resources and reloads the page when you return to it. Linux does have the ability to suspend processes, and you can save a lot of battery life, if you suspend Chrome when you aren't using it.

I don't know of any GUI for it, although most window managers make it easy to assign a keyboard shortcut to a command. Whenever you aren't using Chrome but don't want to deal with closing it and re-opening it, run the following command (and ignore the name, it doesn't kill the process):

    killall -STOP google-chrome
When you want to go back to using it, run:

    killall -CONT google-chrome
This works for any application, and the RAM usage will remain the same while suspended, but it won't draw power reading from or writing to RAM, and its CPU usage will drop to zero. The windows will remain open, and the window manager will handle them normally, but what's inside won't update, and clicks won't do anything until resumed.
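
If you do bind it to a keyboard shortcut, a tiny toggle script saves having two bindings. Just a sketch, assuming the process name is google-chrome as above:

    #!/bin/sh
    # toggle: resume Chrome if it's stopped, otherwise suspend it
    if ps -o stat= -C google-chrome | grep -q '^T'; then
        killall -CONT google-chrome
    else
        killall -STOP google-chrome
    fi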

AFAICT the comparisons to safari are no longer true

https://birchtree.me/blog/everyone-says-chrome-devastates-ma...

That might be different on other platforms

I can't vouch for Chrome and Safari themselves, but I can between Thorium and Falkon, because I regularly suspend Thorium and open the same page with Falkon, and watch the CPU usage graph drop from pegging a core to almost nothing.

I think the GP is talking about linux specifically. On a Mac I can see that Chrome disables unused tabs (mouse over says "Inactive tab, xxx MB freed up")

I have inactive tabs on Linux, and it shows the same thing.

Well, there is a major architectural reason why the entire M-series appears to be "so fast", and that is the unified memory, which eliminates the buffer-to-buffer data copying that is probably over half of what a non-unified memory architecture chip is doing at any given time: just reference the data where it is, and you're done.

I really like the principles behind AMD's chiplet design, of course they've had different design goals behind it (easier diversification of their product portfolio), but it remains a fact that you can slap a not-so-terrible GPU right next to a CPU core.

There's probably a lot still missing: Apple integrated the memory on the same die, and built Metal for software to directly take advantage of that design. That's the competitive advantage of vertical integration.

> Apple integrated the memory on the same die

It's on the same package but not the same die

Is that what game consoles have done for years?

Apple made a big deal about this, but other iGPUs have done this for years.

It's not just the GPU memory, it's also I/O memory. That speeds up a lot: just update the pointer to where the memory is, no copying out of I/O memory.

I think this is partially down to Framework being a very small and new company that doesn't have the resources to make the best use of every last coulomb, rather than an inherent deficiency of x86. The larger companies like Asus and Lenovo are able to build more efficient laptops (at least under Windows), while Apple (having very few product SKUs and full vertical integration) can push things even further.

notebookcheck.com does pretty comprehensive battery and power efficiency testing - not of every single device, but they usually include a pretty good sample of the popular options.

Framework is a bit behind the others in terms of cooling, apparently due to compromises needed to achieve modularity. However, a well-tuned Ryzen U in the latest ThinkPads is not that far from M chips in terms of computing power per Watt according to some benchmarks.

Most Linux distributions are not well tuned, because this is too device-specific. Spending a few minutes writing custom udev rules, with the aid of powertop, can reduce heat and power usage dramatically. Another factor is Safari, which is significantly more efficient than Firefox and Chromium. To counter that, using a barebones setup with few running services can get you quite far. I can get more than 10 hours of battery from a recent ThinkPad.

> using a barebones setup with few running services

The entire point here is that you can run whatever the hell you want on Apple's stuff without breaking a sweat. I shouldn't have to counter shit.

+1 on powertop, I have used it successfully for tuning old Macs that I have upcycled with Linux and the difference is night and day.

powertop helps a lot; I went from 3-4 hours to 6-7 hours on a ThinkPad. That said, it's not something you would want to bother a regular user with. E.g. enabling powertop's optimizations will enable USB autosuspend, which adds a delay every darn time you haven't touched your USB keyboard or mouse for a second. So, you end up writing udev rules that exclude certain HID devices (or using different settings for when a laptop is on power or not), etc.

These are the kinds of optimizations that macOS does out of the box and you cannot expect most Linux users to do (which is one of the reasons battery life is so bad on Linux out-of-the-box).

I agree. The trick is to use powertop's suggestions to craft good udev rules, not to enable the powertop optimizations daemon directly. That doesn't work well in many scenarios. Someone should create a udev rule hardware database, or a udev rule generator for laptops and desktops to help common users.
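
For anyone who hasn't written one, a rough sketch of what such rules look like, e.g. in /etc/udev/rules.d/50-usb-power.rules; the IDs are placeholders, take your own from lsusb:

    # enable USB autosuspend by default
    ACTION=="add", SUBSYSTEM=="usb", TEST=="power/control", ATTR{power/control}="auto"
    # ...but keep the wireless keyboard/mouse receiver awake (placeholder IDs)
    ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="046d", ATTR{idProduct}=="c52b", ATTR{power/control}="on"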

One downside of Framework is they use DDR instead of LPDDR. This means you can upgrade or replace the RAM, but it also means memory is much slower and more power hungry.

It's also probably worth putting the laptop in "efficiency" mode (15W sustained, 25W boost, per Framework). The difference in performance should be fairly negligible compared to balanced mode for most tasks, and it will use less energy.

Hopefully Framework will move to https://en.wikipedia.org/wiki/CAMM_(memory_module) in the future. But it'd have to become something that's widely available and readily purchased.

However, the latency of DDR is much better than LPDDR, so there are pros and cons.

Isn't Ryzen AI (Strix Point?) using similar non-upgradeable LPDDR?

Framework does not have any design with those LPDDR packages.

https://frame.work/desktop?tab=specs

"LPDDR5x-8000"

On their desktop Ryzen AI Max, which uses kind of the same design as "Unified Memory" on Apple silicon. I think the comment you are replying to refers to their laptop designs.

Ok, I was wrong. Didn't think of checking the desktop designs since it was a discussion on laptops.

They even decided to make me lie — twice, on the same day with their latest announcement: https://frame.work/ro/en/laptop16?tab=whats-new

I tend to think it's putting the memory on the package. Putting the memory on the package has given the M1 Max over 400GB/s, which is a good 4x that of a usual dual-channel x64 CPU, and the latency is half that of going out to a DRAM slot. That is drastic, and I remember when the northbridge was first folded into the CPU by AMD with the Athlon, it had similarly big improvements in performance. It also reduces power consumption a lot.

The cost is flexibility and I think for now they don't want to move to fixed RAM configurations. The X3D approach from AMD gets a good bunch of the benefits by just putting lots of cache on board.

Apple got a lot of performance out of not a lot of watts.

One other possibility on power saving is the way Apple ramps the clock speed. It's quite slow to ramp from its 1GHz idle to 3.2GHz - about 100ms, and it doesn't even start for 40ms. With tiny little bursts of activity like web browsing and such, this slow transition likely saves a lot of power at a cost of absolute responsiveness.

> and the latency is half that of going out to a DRAM slot.

No, it's not. DRAM latency on Apple Silicon is significantly higher than on the desktop, mainly because they use LPDDR which has higher latencies.

I was going to mention this as well.

Source: chipsandcheese.com memory latency graphs

Yes, this saves a lot of power and adds performance. But destroys your eco system and annoys a vocal user base. Apple has no eco system and lots of fans, so they are playing their cards right.

A small reason for less power consumption with on die RAM is that you don't need active termination, which does use a few watts of power. It isn't the main reason that the Macs use less power, though.

> this slow transition likely saves a lot of power at a cost of absolute responsiveness.

Not necessarily. Running longer at a slower speed may consume more energy overall, which is why "race to sleep" is a thing. Ideally the clock would be completely stopped most of the time. I suspect it's just because Apple are more familiar with their own SoC design and have optimised the frequency control to work with their software.

Memory bandwidth is not what makes the CPU fast and efficient. The CPU doesn't even have access to the full Apple Silicon bandwidth.

On package memory increases efficiency, not speed.

However, most of the speed and efficiency advantages are in the design.

AMD needs to put out a reference motherboard to pair with their chips. They're basically relying on third-party "manufacturers" to put up the R&D. We have decades of these mobo manufacturers doing the bare minimum, churning out crappy quality mobos. No one's interested in overclocking in 2025. Why am I paying a $300 premium for a feature I don't care about?

On my Framework (16), I've found that switching to GNOME's "Power Saver" mode strikes the right balance between thermals, battery usage and performance. I would recommend trying it. If you're not using GNOME, manually modifying `amd_pstate` and `amd_pstate_epp` (either via kernel boot parameters or runtime sysfs parameters) might help out.
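
A minimal sketch of the sysfs side, assuming the amd_pstate_epp driver is active (the kernel-parameter route is amd_pstate=active on reasonably recent kernels):

    # confirm the driver mode and see which hints it accepts
    cat /sys/devices/system/cpu/amd_pstate/status
    cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences

    # bias all cores toward efficiency at runtime
    echo power | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference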

I agree that it's unfortunate that the power usage isn't better tuned out of the box. An especially annoying aspect of GNOME's "Power Saver" mode is that it disables automatic software updates, so you can't have both automatic updates and efficient power usage at the same time (AFAIK)

On the efficiency side, there's a big difference in the OS department. The recently released handheld Lenovo Go S has both SteamOS (which is Arch, btw) and Windows 11 versions, allowing a direct comparison of the efficiency of AMD's Z1E chip under load with a limited TDP. And the difference is huge: with SteamOS, fps is significantly higher and at the same time the battery lasts a lot longer.

Windows does a lot of useless crap in the background that kills battery and slows down user-launched software

There's a dimension to this people wilfully ignore: the AArch64 design is inspired, especially if you have a team as good as Apple have to execute an implementation of it. And that isn't a one way causality because AArch64 is what it is because of things that the Apple team wanted to do, which has led to their performance advantages today.

I don't think many people have appreciated just how big a change the 64 bit Arm was, to the point it's basically a completely different beast than what came before.

From the moment the iPhone went 64 bit it was clear this was the plan the whole time.

Like a few other comments have mentioned, AMD's Strix Halo / AI Max 380 and above is the chip family that is closest to what Apple has done with the M series. It has integrated memory and decent GPU. A few iterations of this should be comparable to the M series (and should make local LLMs very feasible, if that is your jam.)

On Cinebench 2024 single-threaded, M4 is roughly 4x more efficient and 50% faster than Strix Halo. These numbers can be verified by googling Notebookcheck.

How many iterations to match Apple?

yes and no. i have macbook pro m4 and a zbook g1a (ai max 395+ ie strix halo)

In day to day usage the strix halo is significantly faster, and especially when large context LLM and games are used - but also typical stuff like Lightroom (gpu heavy) etc.

on the flip side the m4 battery life is significantly longer (but also the MBP is approx 1/4 heavier)

for what its worth i also have a t14 with a snapdragon X elite and while its battery is closer to a mbp, its just kinda slow and clunky.

so my best machine right now is the x86 actually!

  yes and no. i have macbook pro m4 and a zbook g1a (ai max 395+ ie strix halo)
You're comparing the base M4 to a full fat Strix Halo that costs nearly $4,000. You can buy the base M4 chip in a Mac Mini for $500 on sale. A better comparison would be the M4 Max at that price.

Here's a comparison I did between Strix Halo, M4 Pro, M4 Max: https://imgur.com/a/yvpEpKF

As you can see, Strix Halo is behind M4 Pro in performance and severely behind in efficiency. In ST, M4 Pro is 3.6x more efficient and 50% faster. It's not even close to the M4 Max.

  (but also the MBP is approx 1/4 heavier)
Because it uses a metal enclosure.

Someone has these two machines, and claims the x86 feels faster in his work.

You don't own any of the machines but have "made" a comparison by copying data from the internet I assume.

This is like explaining to someone who eats a sweet apple that the internet says the apple isn't sweet.

MacBook Pro, 2TB, 32gb, 3200 EUR

HP G1a, 2TB, 128gb, 3700 EUR

If we don't compare laptops but mini-PCs,

Evo X2, 2TB, 128gb, 2000 EUR,

Mac Mini, 2TB, 32gb, 2200 EUR

Their point is that they’re comparing between SoCs that aren’t in the same class, not that it’s not fast.

They’re not arguing against their subjective experience using it, they’re arguing against the comparison point as an objective metric.

If you’re picking analogies, it’s like saying Audis are faster than Mercedes but comparing an R8 against an A class.

1. Everyone is different; I don't care if a computer is worse on paper if it's better in real life

2. I'd say apples and oranges is subjective and depends on what is important to you. If you're interested in Vitamin C, apples to oranges is a valid comparison. My interest in comparing this is for running local coding LLMs - and it is difficult to get great results on 24/32gb of Nvidia VRAM (but it is by far the fastest option/$ if your model fits into a 5090). For the models I want to work with you often need 128gb of RAM, therefore I'd compare a Mac Studio 128gb (the cheapest option from Apple for a 128gb RAM machine) with a 395+ (the cheapest (only?) option for x86/Linux). So what is apples to oranges to you makes sense to many other people.

3. Why would you think a 395+ and an M4 Pro are in "a different class"?

Let me start with your last point because it’s where you’ve misread the original comment and why none of your following arguments seem to make sense to onlookers.

They have a MacBook Pro with an M4, not an M4 Pro. That is a wildly different class of SoC from the 395. Unless the 395 is also capable of running in fanless devices too without issue.

For your first point, yes it does matter if the discussion is about objectively trying to understand why things are faster or not. Subjective opinions are fine, but they belong elsewhere. My grandma finds her Intel celeron fast enough for her work, I’m not getting into an argument with her over whether an i9 is faster for the same reason.

Your second point is equally as subjective, and out of place in a discussion about objectively trying to understand what makes the performance difference.

  You don't own any of the machines but have "made" a comparison by copying data from the internet I assume.

  This is like explaining to someone who eats a sweet apple that the internet says the apple isn't sweet.
Yea, I never said he is wrong in his own experience. I was pointing out that the comparison is made between a base M4 and maxed out Ryzen. If we want to compare products in the same class, then use M4 Max.

  MacBook Pro, 2TB, 32gb, 3200 EUR
A little disingenuous to max out on the SSD to make the Apple product look worse. SSD prices are bad value on Apple products. No one is denying that.

I didn't "max out" the SSD, I chose an SSD to match the machine of the user.

You: "You're comparing the base M4 to a full fat Strix Halo that costs nearly $4,000."

Then

You: "A little disingenuous to max out on the SSD to make the Apple product look worse."

  I didn't "max out" the SSD, I chose an SSD to match the machine of the user.
Why don't you try to match in CPU speed, GPU speed, NPU speed, noise, battery life, etc? Why match SSD only?

That's why your post was disingenuous.

If it helps you focus on the actual discussion: we are comparing maximum CPU and GPU speed for the dollar. That's it.

Evo X2, 128gb, 2000 EUR

Mac Studio, 128gb, 4400 EUR

Great. Here's what you're getting between an M4 Max vs an AMD AI 395+: https://imgur.com/a/yvpEpKF

And of course, the Mac Studio itself is a much more capable box with things like Thunderbolt5, more ports, quieter, etc.

I can see why some people would choose the AMD solution. It runs x86, works well with Linux, can play DirectX games natively, and is much cheaper.

Meanwhile, the M4 Max performs significantly better, is more efficient, is likely much quieter, runs macOS, has more ports and better build quality, and comes with Apple backing and support.

AMD 395+ / CachyOS MT Geekbench 6 25334 - not sure where you got your Geekbench number for the 395+ from

You: "If it helps you focus on what the actual discussion, we are comparing maximum CPU and GPU speeds for the dollar."

You: "Mac Studio itself is a much more capable box with things like Thunderbolt5, more ports, quieter"

You: AMD 395+/ Cachyos MT Geekbench 6 25334

Me: https://imgur.com/a/yvpEpKF

I also have a Strix Halo zbook G1A and I am quite disappointed in the idle power consumption as it hovers around 8W.

Adding to that, it is very picky about which power brick it accepts (not every 140W PD compliant works) and the one that comes with the laptop is bulky and heavy. I am used to plugging my laptop into whatever USB-C PD adapter is around, down to 20W phone chargers. Having the zbook refuse to charge on them is a big downgrade for me.

> Adding to that, it is very picky about which power brick it accepts (not every 140W PD compliant works) and the one that comes with the laptop is bulky and heavy. I am used to plugging my laptop into whatever USB-C PD adapter is around, down to 20W phone chargers. Having the zbook refuse to charge on them is a big downgrade for

It's Dell, they are probably not actually using PD3.1 to achieve the 140W mark; instead they are probably using the PD3.0 extension and shoving 20V 7A into the laptop. I can't find any info, but you can check on the charger.

If it lists 28V then it's 3.1, else 3.0. If it's 3.1 you can get a Baseus PowerMega 140W PD3.1, seems like a reeeeally solid charger from my limited use.

It is HP, and the output of the provided adapter does 28V 5A, so in spec.

With some of the other 28V 5A adapters I have, it charges until triggering a compute heavy task and then stops. I have seen reports online of people seeing this behavior with the official adapter. My theory is that the laptop itself does not accept any ripple at all.

Ah my bad. Are you sure your cable can do 140w? That was the source of most of my pains trying to push 100w to my work laptop. Baseus and Anker have some good PD3.1 chipped cables that worked for me. What kind of charger are you using?

I am also in search of a good portable brick to replace the 140W one. I found the 100W Anker Prime works well. And surprisingly there is an almost identical 3-port Baseus 100W GaN at half the price. For some reason it is hard to come by (they have a few other 100W bricks that are not so portable); I think it might be discontinued.

The important part of this is 'single threaded'. If you are actually using Cinebench to do real rendering you would always want multi-core performance, which pretty much makes Apple's single-core benchmark results pointless.

> How many iterations to match Apple?

Why are you asking me? I'm not in charge of AMD.

Yes the Strix Halo is not as fast on the benchmarks as the M4 Max, its bandwidth is lower, and the max config has less memory. However, it is available in a lot of different configurations and some are much cheaper than comparable M4 systems (e.g. the maxed out Framework desktop is $2000.) It's a tradeoff, as everything in life is. No need to act like such an Apple fanboi.

  Why are you asking me? I'm not in charge of AMD.
Because you claimed this so I thought you knew:

  A few iterations of this should be comparable to the M series

On one of the few workloads where massive parallelism makes sense, why quote a single threaded number? I'm curious.

To show in real numbers why people always say a MacBook feels miles ahead of AMD and Intel in actual real-world experience.

The primary reason is the ST speed (snappy feeling) and the efficiency (no noise, cool, long battery life).

It just so happens that Cinebench 2025 is the only power measurement metric I have available via Notebookcheck. If Notebookcheck did power measurements for GB6, I'd rather use that as it's a better CPU benchmark overall.

Cinebench 2025 is a decent benchmark but not perfect. It does a good enough job of demonstrating why the experience of using Apple Silicon is so much better. If we truly want to measure the CPU architecture like a professional, we would use SPEC and measure power from the wall.

>How many iterations to match Apple?

Until AMD can build a tailor-made OS for their chips and build their own laptops.

Here's an M4 Max running macOS running Parallels running Windows compared to AMD's very best laptop chip:

https://browser.geekbench.com/v6/cpu/compare/13494385?baseli...

M4 Max is still faster. Note that the M4 Max is only given 14 out of 16 cores, likely reserving 2 of them for macOS.

How do you explain this when Windows has zero Apple Silicon optimizations?

Maybe Geekbench is not a good benchmark?

Maybe it is? Cinebench favors Apple even more.

GB correlates highly with SPEC. AMD also uses GB in their official marketing slides.

Geekbench is the closest thing to a good benchmark that's usable across generations and architectures.

> A few iterations of this should be comparable to the M series

This assumes Apple's M series performance is a static target. It is not. Apple is iterating too.

> It has integrated memory

And has had for many years. Even Apple had that with the Apple II.

What point are you trying to make? Strix Halo was released this year. How is the architecture of the Apple II relevant?

A lot of insightful comments already, but there are two other tricks I think Apple is using: (1) the laptops can get really hot before the fans turn on audibly and (2) the fans are engineered to be super quiet. So even if they run on low RPM, you won't hear them. This makes the M-series seem even more efficient than they are.

Also, especially the MacBook Pros have really large batteries, on average larger than the competition. This increases the battery runtime.

The MacBook Air doesn't even have a fan. I don't think you could build a fanless x86 laptop.

Sure you can. There are a bunch listed in this article: https://www.ultrabookreview.com/6520-fanless-ultrabooks/

Fanless x86 desktops are a thing too, in the form of thin clients and small PCs intended for business use. I have a few HP T630s I use as servers (I have used them as desktop PCs too, but my tab-hoarding habit makes them throttle a bit too much for my use - they'd be fine for a lot of people).

My experience with fanless Intel is that they tend to be rather sluggish for desktop GUI use, though. Which doesn't seem to be an issue with Macbook Air.

Do you have a version of that web page for people who want to run Linux? That'd be particularly helpful.

I've been experimenting with Asahi Linux recently on a spare M2 Air I have lying around, honestly very impressed. It's come on a lot since I last tried it a year or so ago

It's x86, they all run Linux. x86 (as in amd64) is standardized.

There certainly have been issues with drivers. It'd be nice to know in advance if that's the case with any particular system.

> I don't think you could build a fanless x86 laptop.

Sure you can, they’re readily available on the market, though not especially common.

But even performance laptops can often be run without spinning their fans up at all. Right now, the ambient temperature where I live is around 28°, and my four-year-old Ryzen 5800HS laptop hasn’t used its fan all day, though for a lot of that time it will have been helped by a ceiling fan. But even away from a fan for the last half hour, it sits in my lap only warm, not hot. It’s easy enough to give it a load it’ll need to spin the fan up for, but you can also limit it so it will never need its fan. (In summer when the ambient temperature is 10°C higher every day, you’ll want to use its fan even when idling, and it’ll be hard to convince it not to spin them up.)

x86-64 devices that don't even have fans won't ever have such powerful CPUs, and historically have always been very underpowered. Like only 60% of my 5800HS's single-threaded benchmarking and only 20% of its multithreaded. But at under 20% of the peak power consumption.

Sure, I have one sitting on my desk right now. It uses an Intel Core m3, and it's 7.5 years old, so it can't exactly be described as high performance, but it has a fantastic 3200x1800 screen and 8GB of RAM, and since I do all my number-crunching on remote servers it has been absolutely perfect. Unfortunately, the 7.5-year-old battery no longer lasts the whole day (it'll do more like 2 hours, or 1 hour running Zoom/Teams). It has a nice rigid all-metal construction and no fan. I'm looking around for a replacement but not finding much that makes sense.

It can consume almost 20W sustained, which is quite a lot. Competitors will definitely have fans roaring at this power draw. I think the all metal design makes a huge difference from a cooling perspective. The entire case is basically a heatsink.

You can, the thing is you have to build it out of a solid piece of metal. Either that's patented by Apple or it is too expensive for x86 system builders.

If I recall correctly Apple had to buy enormous numbers of CNC machines in order to build laptops that way. It was considered insane by the industry at the time.

Now it makes complete sense. Sort of like how crowbarring a computer into a laptop form factor was considered insane back in the early 90s.

Yup. The original article is gone, however there is the key excerpt in an old HN thread: https://news.ycombinator.com/item?id=24532257

Apple, unlike a lot, if not all large companies (who are run by MBA beancounter morons), holds insanely large amounts of cash. That is how they can go and buy up entire markets of vendors - CNC mills, TSMC's entire production capacity for a year or two, specialized drills, god knows what else.

They effectively price out all potential competitors at once for years at a time. Even if Microsoft or Samsung would want to compete with Apple and make their own full aluminium cases, LED microdots or whatever - they could not because Apple bought exclusivity rights to the machines necessary.

Of course, there's nothing stopping Microsoft or Samsung from doing the same in theory... the problem these companies have is that building the necessary war chest would drag down their stonk price way too much.

For those like me who wanted to hunt down the linkrotted article:

https://web.archive.org/web/20201108182313/http://atomicdeli...

Some of the other big tech companies have or are able to have just as much, if not more cash, than Apple:

https://www.capitaladvisors.com/research/war-chest-exploring...

They just don’t want to bet they can deploy it successfully in the hardware market to compete with Apple, so they focus on other things (cloud services, ads, media, etc).

Google is not a hardware company (outside of the Pixel lineup where they just take some white-label ODM design).

Microsoft has a bit more hardware sales exposure from its consoles, but not for PCs. They don't have a need for revolutionary "it looks cool" stuff that Apple has.

Amazon, same thing. They brand their own products as the cheap baseline, again no need.

And Meta, all they do is VR stuff. And they did invest(ed?) tons of money into that.

The point is they have enough cash to make an attempt to be whatever company they want. Apple chose to delve into hardware, the others chose not to, not because they don’t have the cash.

I considered getting a personal MBP (I have an M3 from work), but picked up a Framework 13 with the AMD Ryzen 7 7840U. I have Pop!_OS on it, and while it isn't quite as impressive as the MBP, it is radically better than other Windows / Linux laptops I have used lately. Battery life is quite good, ~5hr or so, not quite on par with the MBP but still good enough that I don't really have any complaints (and being able to upgrade the RAM / SSD / even the mobo is worth some tradeoff to me, where my employers will just throw my MBP away in a few years).

> "[...] battery life is quite good, ~5hr or so [...]"

You call five hours good?! Damn... For productivity use, I'd never buy anything below shift-endurance (eight hours or more).

Depends on what you do at work; 5 hours of continuous video editing is pretty good.

Highly dependent on workload: my older work laptop with a 100Wh battery lasted maybe ~40min if you put some real work on it. Browsing the web or managing tickets on Jira is completely different.

5 hours seems a lot worse than the ~10 hours I get on my M4 Air.

I get 8 to 10 hours of light use on my personal ThinkPad. Or ~6 h of Netflix at 50% screen brightness, despite the lack of hardware decoding for DRM encrypted video on Linux. All of these are with a max charge threshold of 80%. 5 hours of battery life sounds rather limited to me, too.

But then the numbers are hardly comparable without having comparable workloads. If I were regularly running builds or had some other moderate load throughout a working day, that'd probably cost a couple of hours.
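(For anyone wondering how that 80% cap is typically set on a ThinkPad under Linux, here is a minimal sketch; the sysfs attribute name and path can vary by vendor and kernel, and tools like TLP wrap the same interface:)

  cat /sys/class/power_supply/BAT0/charge_control_end_threshold    # current limit
  echo 80 | sudo tee /sys/class/power_supply/BAT0/charge_control_end_threshold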

I get like 3 hours on my MBP when I use it. MacBooks have better runtime only when they are mostly idle, not when you fully load them.

Can confirm: when developing software (a big project at $JOB), getting 3h out of an M3 MBP is a good day. IDE, build, test and Crowdstrike are all quite power hungry.

I wonder how much of that is crowdstrike. At $LASTJOB my Mac was constantly chugging due to some mandated security software. Battery life on that computer was always horrible compared to a personal MB w/o it.

Exactly. Antiviruses are evil in this sense - crippling battery life significantly.

Wherever possible, I send "pkill -STOP" to all those processes to stall them and thus save battery…
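A minimal sketch of that trick, assuming the agent shows up in the process list under a name like "falcon-sensor" (purely illustrative; the real process name varies by product and platform):

  # pause the agent (it stays resident but stops burning CPU)
  pkill -STOP -f falcon-sensor
  # resume it later with the continue signal
  pkill -CONT -f falcon-sensor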

The firewall on that computer killed the battery (with repeated crashing). It also refused to work with a USB Ethernet adapter so I could only use wifi. It was clearly a product meant to check a security box, written by a company that knew nothing about Macs, bought by Enterprise Windows admins. It was incredibly frustrating. (The next version of MacOS moved firewalls away from in-kernel to extensions. I like to think it was my repeated crash logs that made the difference.)

I half wonder if that’s part of the issue with Windows PCs and their battery life. The OS requires so much extra monitoring just to protect itself that it ends up affecting performance and battery life significantly. It wouldn’t be surprising to me if this alone was the major performance boost Macs have over Windows laptops.

> crowdstrike

It is incredible that crowdstrike is still operating as a business.

It is also hard to understand why companies continue to deploy shoddy, malware-like "security" software that decreases reliability while increasing the attack surface.

Basically you need another laptop just to run the "security" software.

Allegedly, Crowdstrike is S-tier EDR. Can't blame security folks for wanting to have it. The performance and battery tax is very real though.

Ever since Crowdstrike fucked up and took out $10 billion worth of Windows PCs with a bad patch, most of the security folks I know have come around to the view that it is an overall liability. Something lighter-touch carries less risk, even if it isn't quite as effective.

There's a few different reasons:

- it's pushed by gov (it gives full access to machines, huge backdoor)

- it's not actually the worst of its kind, sadly

- their threat database is good (ie it will catch stuff)

- it lets you look at everything on the machine (not the only one, but it's def. useful)

- it's big, so you can't be faulted for "we had it and we got pwned" - yep, sad as well

If operating systems weren't as poop as they are today, this would not be necessary - but here we are. And I bet major OS manufacturers will not really fix their OSes without turning them into fully walled gardens (terrible for devs.. but you'll probably just run a Linux VM for dev on top..). Bad intentions lead to bad software.

I concur.

The only portable M device I heavily used on the go was my iPad Pro.

That thing could survive for over a week if unused or only lightly used. But as soon as you open Lightroom to process photos, the battery would melt away in an hour or two.

At a certain point it's not like it matters. If you're working for 5 hours, let alone 10, you will almost certainly be able to plug in during that time.

It’s true for me. I need a portable workstation more than a mobile laptop; as long as it survives train travel (most trains have power outlets now), moving between buildings/rooms, or the occasional meeting with a customer plus a presentation, it is enough for me.

But I can imagine some people have different needs and may not have access to (enough) power outlets. Some meeting/conference rooms have only a handful of outlets for dozens of people. It's definitely nice to survive light office work for a full working day.

Curious if the suspend / hibernate "just works" when you close the lid?

I feel like I've tried several times to get this working in both Linux and Windows on various laptops and have never actually found a reliable solution (often resulting in having a hot and dead laptop in my backpack).

I have an Intel Framework running Fedora. I have found that Intel's s0 sleep just uses way too much battery. I’d expect that in sleep mode it should last a week and still be above 50% charge, but that is definitely not the case.

I ended up moving to hybrid, where it suspends for an hour allowing immediate wake up then hibernates completely. It’s a decent compromise and I’ve never once had an issue with resume from suspend or hibernate, nor have I ever had an issue with it randomly waking up and frying itself in a backpack or unexpectedly having a dead battery.

My work M1 is still superior in this regard but it is an acceptable compromise.

I learned that even though I run Ubuntu, the Arch Wiki has good info on the proper commands to configure this behavior on my machine.
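For reference, a minimal sketch of that hybrid setup on a systemd-based distro (assuming swap large enough to hibernate into; option names can differ slightly between systemd versions):

  # /etc/systemd/sleep.conf -- suspend first, hibernate after an hour
  [Sleep]
  HibernateDelaySec=60min

  # /etc/systemd/logind.conf -- make the lid switch use the hybrid behaviour
  [Login]
  HandleLidSwitch=suspend-then-hibernate

  # then restart systemd-logind (or reboot) for the logind change to apply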

It does! The only thing that wasn't working out of the box, so to speak, was the fingerprint reader; I had to do a little config to get it going.

If it makes you feel better, my work-provided MBP has picked up this habit and is dead half the time I go to wake it up.

Windows laptops are still worse, but I appreciate Apple continuing to give me reasons to hate them.

I’m sure it’s great.

As a layman there’s no way I’m running something called “Pop!_OS” versus Mac OS.

How'd you get here - "as a layman"?

You're missing out. I've daily-driven both; modern macOS feels like a Fisher-Price operating system by comparison.

Meh, it's kind of a silly name, sure, but it's one of the few distros backed by an actual vendor (System76) who isn't just trying to sucker you into buying something. As a result it has a nice level of polish and function.

I like macOS fine, I have been using Macs since 1984 (though things like SIP grate).

1. Memory soldered to the CPU

2. Much more cache

3. No legacy code

4. High frequencies (to be 1st in game benchmarks; see what happens when you're a little behind, like the last Intel launch - the perception is that Intel has bad CPUs because they are some percentage points behind AMD in games. That's pressure Apple doesn't have, since comparisons are mostly Apple vs. Apple and Intel vs. AMD)

The engineers at AMD are just as good as the ones at Apple, but the two markets demand different chips and they get different chips.

For some time now the market has been talking about energy efficiency, and we see:

1. AMD soldering memory close to the CPU

2. Intel and AMD adding more cache

3. Talks about removing legacy instructions and bit widths

4. Lower out of the box frequencies

Will take more market pressure and more time though.

> When can we expect a x86 laptop chip to match the M1 in efficiency/thermals?!

One of the things Apple has done is to create a wider core that completes more instructions per clock cycle for performance while running those cores at conservative clock speeds for power efficiency.

Intel and AMD have been getting more performance by jacking up the clock speeds as high as possible. Doing so always comes at the cost of power draw and heat.

Intel's Lunar Lake has a reputation for much improved battery life, but also reduces the base clock speed to around 2 gigahertz.

The performance isn't great vs the massively overclocked versions, but at least you get decent battery life.

Apple tailors their software to run optimally on their hardware. Other OSs have to work on a variety of platforms, which limits the amount of hardware-specific optimization they can do.

Well I don’t think so.

First, OP is talking about Chrome, which is not Apple software. And I can testify that I observed the same behavior with other software that is really not optimized for macOS, or at all. JetBrains IDEs are fast on M*.

Also, processor manufacturers are contributors to the Linux kernel and have an economic interest in having Linux run as fast as it can on their platforms if they want to sell them to datacenters.

I think it’s something else. Probably the unified memory?

Chrome uses tons of APIs from MacOS, and all that code is very well optimized by Apple.

I remember disassembling Apple’s memcpy function on ARM64 and being amazed at how much customization they did just for that little function to be as efficient as possible for each length of a (small) memory buffer.

memcpy (and the other string routines) are some of the library functions that most benefit from heavy optimisation and tuning for specific CPUs -- they get hit a lot, and careful adjustment of the code can get major performance wins by ensuring that the full memory bandwidth of the CPU is being used (which may involve using specific load instructions, deciding whether using the simd registers is better or not, and so on). So everybody who cares about performance optimises these routines pretty carefully, regardless of toolchain/OS. For instance the glibc versions are here:

https://github.com/bminor/glibc/tree/master/sysdeps/aarch64/...

and there are five versions specialised for either specific CPU models or for available architecture features.

This argument never passes the sniff test.

You can run Linux on a MacBook Pro and get similar power efficiency.

Or run third party apps on macOS and similarly get good efficiency.

Unfortunately, contrary to popular belief, you cannot run Linux natively on recent MacBooks (M4) today.

That doesn’t really affect what I’m saying though. Yes, support capped out with the M2, but you can still observe the properties of efficiency on there.

Depends what "natively" means. You can virtualize Linux through several means such as Virtual Box.

...but you won't get similar power efficiency, which was claimed.

You can run Linux on a MacBook Pro and get similar power efficiency.

What? No. Asahi is spectacular for what it accomplished, but battery life is still far worse than macOS.

I am not saying that it is only software. It's everything from hardware to a gazillion optimizations in macOS.

It’s worse at switching power states, but at a given power state it is within the ball park of macOS power use.

The things where it lags are anything that use hardware acceleration or proper lowering to the lower power states.

The fastest and most efficiency Windows laptop in the world is an M4 MacBook running Parallels.

How does it compare with VMWare? I’d rather not use Parallels…

edit: whoever downvoted - please explain, what's wrong with preferring VMWare? also, for me, historically (2007-2012), it's been more performant, but didn't use it lately.

Looks about the same between Parallels and VMWare: https://browser.geekbench.com/v6/cpu/compare/13494570?baseli...

Also, here's proof that M4 Max running Parallels is the fastest Windows laptop: https://browser.geekbench.com/v6/cpu/compare/13494385?baseli...

M4 Max is running macOS running Parallels running Windows and is only using 14 out of 16 possible cores and it's still faster than AMD's very best laptop chip.

No, it's not, it's absolutely the hardware. The vertical integration surely doesn't hurt, but third-party software runs very fast and efficient on M-series too, including Asahi Linux.

Does Asahi Linux now run efficiently? I tried it on M1 about two years ago. Battery life was maybe 30% of what you get on macOS.

For what it's worth -- and I'm not familiar with the Framework 13 -- but I did recently review a marketed-for-AI-workloads laptop with Ryzen 260 CPU and Nvidia 5060 laptop GPU, which shipped with Windows, and was curious how graphical Ubuntu with GNOME would run from a fresh install on it. It ran hot on simple tasks, with severely worse battery performance (from 11h runtime playing a local video stream via Firefox to 3.5h) and moderately worse total work output relative to Windows.

It runs Debian headless now (I didn't have particular use for a laptop in the first place). Not sure just how unpopular this suggestion'd be, but I'd try booting Windows on the laptop to get an idea of how it's supposed to run.

What is the power profile setting? Is it on balanced or performance? Install powertop and see what is up. What distro are you using? The Linux drivers for the new AMD chips might stink because the chips are so new. Linux drivers for laptops stink in general compared to Windows. I know my 11th-gen Intel WiFi still doesn't work right, even with the latest kernel and with power saving disabled on the WiFi.
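If anyone wants to run the same checks, a rough sketch (assuming power-profiles-daemon and powertop are installed; package names differ by distro):

  powerprofilesctl get            # current profile: power-saver / balanced / performance
  powerprofilesctl set power-saver
  sudo powertop                   # the "Tunables" tab lists devices stuck in high-power states
  sudo powertop --auto-tune       # apply the suggested tunables (test before making permanent)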

I may be out of date or wrong, but I recall when the M1 came out there were some claims that x86 could never catch up, because there is an instruction decoding bottleneck (instructions are all variable size), which the M1 does not have, or can do in parallel. Because of that bottleneck x86 needs to use other tricks to get speed, and those run hot.

ARM instructions are fixed size, while x86 are variable. This makes a wide decoder fairly trivial for ARM, while it is complex and difficult for x86.

However, this doesn't really hold up as the cause for the difference. The Zen4/5 chips, for example, source the vast majority of their instructions out of their uOp trace cache, where the instructions have already been decoded. This also saves power - even on ARM, decoders take power.

People have been trying to figure out the "secret sauce" since the M chips have been introduced. In my opinion, it's a combination of:

1) The apple engineers did a superb job creating a well balanced architecture

2) Being close to their memory subsystem with lots of bandwidth and deep buffers so they can use it is great. For example, my old M2 Pro macbook has more than twice the memory bandwidth than the current best desktop CPU, the zen5 9950x. That's absurd, but here we are...

3) AMD and Intel heavily bias on the costly side of the watts vs performance curve. Even the compact zen cores are optimized more for area than wattage. I'm curious what a true low power zen core (akin to the apple e cores) would do.

But is the uOp trace cache free? It surely doesn’t magically decode and put stuff in there without cost

For sure.. for what it's worth though, I have run across several references to ARM also implementing uop caches as a power optimization versus just running the decoders, so I'm inclined to say that whatever its cost, it pays for itself. I am not a chip designer though!

Apple never used a uop cache in their designs. ARM dropped uop caches when they removed 32-bit support. Qualcomm also skipped uop cache.

uop made sense with 32-bit support because the 32-bit ISA was so complex (though still simple compared to x86). Once they went to a simplified instruction design, the cost to decode every single time was lower than the cost of maintaining the uop cache.

When limited to 5 watts, the Ryzen HX 370 works pretty darn well. In some low-power user cases, my GPD Pocket 4 is more power efficient than my M3 MBA.
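For context, "limited to 5 watts" here usually means capping the SoC power limits; on Linux one common (third-party, use-at-your-own-risk) way to do that is RyzenAdj, roughly like this, with the milliwatt values being illustrative only:

  # assumes ryzenadj built from https://github.com/FlyGoat/RyzenAdj
  sudo ryzenadj --stapm-limit=5000 --fast-limit=5000 --slow-limit=5000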

We are going to need to see some numbers for your claim. That’s not believable.

A 8.8" screen takes a lot less power.

When you say efficiency, I assume you’re factoring in performance of the device as well?

Maybe run Geekbench 6 and see.

I am not the original commenter; but they said "low-power user cases" i.e. very much not when running Geekbench; rather when it is near idle.

FYI, AMD chips are notoriously bad at idle.

We will need some citations on that as the GPD Pocket 4 isn't even the most power efficient pocket pc.

Closest I've seen is an uncited Reddit thread talking about usb c charging draw when running a task, conflating it with power usage.

How about single-core performance?

Zens don't have a trace cache, just an uop cache.

They can always catch up, it may just take a while. x86's variable size instructions have performance advantages because they fit in cache better.

ARM has better /security/ though - not only does it have more modern features, but x86's variable-length instructions also mean you can reinterpret them by jumping into the middle of one.

No one ever said that. The M1 was not the fastest laptop when it was introduced. It was a nice balance of speed/battery life/heat

Plenty of reasons, but the big one would be integration, especially RAM. Apple M series processors are exclusively designed for Apple products running the Apple OS, none of them extensible. It means it can be optimized for that use case.

RAM in particular can be a big performance bottleneck. Apple M has way better bandwidth than most x86 CPUs, and having well-specified RAM chips soldered right next to the CPU instead of having to support DIMM modules certainly helps. The AMD AI Max chips, which also have great memory bandwidth and are the most comparable to Apple M, also use soldered RAM.

Maybe some details like ARM having a more efficient instruction decoder plays a part, but I don't believe it is that significant.

There are really a lot of responses to this which explain it well. The summary, though, might be phrased as 'alignment'. Specifically, when everyone from the mainboard engineer to the product marketeer has the same goals and priorities (is aligned), the overall system reflects that.

In x86 land the processor guys are always trying to 'capture more addressable market', which means features for specific things which perhaps have no value to your 'laptop' but are great for cars embedding the chip. Similarly for display manufacturers, who want standards that work for everyone even if they aren't precisely what everyone wants. Need a special 'sleep the pixels that are turned off' mode for your screen ASIC which isn't part of the HDMI spec? Nah, we're not gonna do that, because who would use it? But Apple can. Specific things in the screen that minimize power that the OS can talk to through 'side channels' that aren't part of any standard? Sure, they can do that too. And if everyone is aligned on long battery life (for example), that happens.

I worked at both Google and Netapp, and both of them bought enough hard drives that they could demand and get specific drive firmware that did things to make their systems run better. Their software knew about the specific firmware and exploited it. They 'aligned' their vendors with their system objectives, which they could do because of their volume purchases.

In the x86 laptop space the 'big' vendors like Dell, HP, Asus, Lenovo, etc. can do that sort of thing. Framework doesn't have the leverage yet. Linux is an issue too, because that community isn't aligned either.

Alignment is facilitated by mutual self interest, vendors align because they want your business, etc. The x86 laptop industry has a very wide set of customer requirements, which is also challenging (need lots of different kinds of laptops for different needs).

The experience is especially acute when one's requirements for a piece of equipment have strayed from the 'mass market' needs so the products offered are less and less aligned with your needs. I feel this acutely as laptops move from being a programming tool to being an applications product delivery tool.

It sounds like something is horribly misconfigured.

- Try running powertop to see if it says what the issue is.

- Switch to firefox to rule out chrome misconfigurations.

- If this is wayland, try x11

I have an AMD SoC desktop and it doesn't spin up the fans or get warm unless it's running a recent AAA title or an LLM. (I'm running Devuan because most other distros I've tried aren't stable enough these days.)

In scatterplots of performance vs wattage, AMD and Apple silicon are on the same curve. Apple owns the low end and AMD owns the high end. There’s plenty of overlap in the middle.

If only the Framework 12 could have the 395+, but I think it cannot work out vs. ARM? And then my M4 Air is just better and cheaper. Cheaper I don't care about much, but the battery vs. perf is quite mental.

There's a lot of trash talking of x86 here but I feel like it's not x86 or Intel/AMD that are the problem for the simple reason that Chromebooks exist. If you've ever used a Chromebook with the Linux VM turned on, they can basically run everything you can run in Linux, don't get hot unless you actually run something demanding, have very good idle power usage, and actually sleep properly. All this while running on the same i5 that would overheat and fail to sleep in Windows / default Linux distros. This means that it is very much possible to have an x86 get similar runtimes and heat output as an M Series Mac, you just need two things:

- A properly written firmware. All Chromebooks are required to use Coreboot and have very strict requirements on the quality of the implementation set by Google. Windows laptops don't have that and very often have very annoying firmware problems, even in the best cases like Thinkpads and Frameworks. Even on samples from those good brands, just the s0ix self-tester has personally given me glaring failures in basic firmware capabilities.

- A properly tuned kernel and OS. ChromeOS is Gentoo under the hood and every core service is afaik recompiled for the CPU architecture with as many optimisations enabled. I'm pretty sure that the kernel is also tweaked for battery life and desktop usage. Default installations of popular distros will struggle to support this because they come pre-compiled and they need to support devices other than ultrabooks.

Unfortunately, it seems like Google is abandoning the project altogether, seeing as they're dropping Steam support and merging ChromeOS into Android. I wish they'd instead make another Pixelbook, work with Adobe and other professional software companies to make their software compatible with Proton + Wine, and we'd have a real competitor to the M1 Macbook Air, which nothing outside of Apple can match still.

In the general case, it appears to be impossible to beat a hardware vendor that is also entirely in charge of the operating system and much of the software on top of that (e.g. safari).

In special cases, such as not caring about battery life, x86 can run circles around M1. If you allow the CPU rated for 400W to actually consume that amount of power, it's going to annihilate the one that sips down 35W. For many workloads it is absolutely worth it to pay for these diminishing returns.

To those who are using the newer MacBook pros, how easy and seamless it is to run Linux on it via Parallels etc without going all the way to Asahi etc? Like if i'm super comfortable with Linux, can I just get near native Linux desktop experience and forget that all of it is running on top of MacOS?

Parallels is quite good - I can watch 4K YouTube videos at 60fps with no noticeable frame drops on an M1 Pro, and general desktop animations, etc. are fine. That said, I do occasionally get rendering glitches, usually in Firefox where a small rectangular portion of the screen will briefly flash black while scrolling quickly through a page.

The biggest quality of life issue for me personally is the trackpad. Although support for gestures and so on has gotten quite decent in Linux land, Parallels only sends the VM scroll wheel events, so there's no way to have smooth scrolling and swipe gestures inside the VM, so it feels much worse than native macOS or Asahi Linux running on the bare metal.

It's pretty seamless, but you can't really get the macOS UI out of the picture entirely. You can run it fullscreen, sure, but even then there are still some shortcuts that are going to be handled by macOS, and also multiple displays etc.

OTOH if you're fine with macOS GUI but you want something like WSL for CLI and server apps, there's https://lima-vm.io
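A minimal sketch of the Lima route, assuming Homebrew (the default template is an Ubuntu guest; prompts differ a bit between versions):

  brew install lima
  limactl start            # creates and boots the default instance
  lima uname -a            # run a command inside the guest
  limactl shell default    # or drop into a shell there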

Why bother with that? Macos is a unix OS already.

A friend at grad school was asking me for advice -- he had an "in" at Intel -- make $250k and do nothing. A friend had promised a basically no-show position for him. My friend was debating between this $250k/yr no-show position at Intel (no growth) vs something elsewhere which was more demanding but would provide more growth.

This isn't the only no-show position I've heard about at Intel. That is why Intel cannot catch up. You probably cannot get away with that at Apple.

How could Intel possibly benefit from that?

Could be some manager who needs to show a certain headcount to maintain their status, but involving more people in the actual work would just be too many cooks in the kitchen. The friend probably had a degree that looked sufficiently good on paper to make the manager look good.

A better question is which (if any) ARM competitors can achieve comparable performance to M-series? I do understand Apple has tuned the entire platform from cpu/gpu, cache, unified memory, and software to achieve what they offer.

I think the challenge is going to be software, software tuning, and (until everyone builds for both ARM and x64) - translation/emulation. I’ll admit that I haven’t had much experience on the Windows side but I made the leap pretty quickly from the early 2015 MBP to an M1 MBA (like maybe a month after the M1 Macs came out) and it very much was seamless, whereas it still sounds like on the Windows on ARM side it’s been languishing even to this day.

I don't give any fucks about battery life or even total power consumption cost; I just hate that I have some crap-ass Apple mid-range (for them) laptop with only 36GB RAM and an "M4 Max" CPU, and it runs rings around my 350W Core i9-14900K desktop Linux workstation, and there is essentially no way I can develop software (Rust, web apps, multi-container Docker crap) on Linux with anything close to the performance of my shitty laptop computer, even if I spend $10,000.

That's actually wild. I think we're in a kind of unique moment, but one that is good for Apple mainly, because their OS is so developer-hostile that I pay back all the performance gains with interest. T_T

To be honest, I haven't done any research on this, but it's something that crosses my mind from time to time. My laptop has 32 GB of RAM and an i7-14700H processor, with Linux Mint installed. I'm more than happy with its performance, especially considering I bought it for a price that was very cheap for the market.

I wonder what specs a MacBook would need to give me similar performance. For example, on Linux with 32 GB of RAM, I can sometimes have 4 or 5 instances of WebStorm open and forget about them running in the background. Could a MacBook with 16 GB of RAM handle that? Similarly, which MacBook processor would give me the real-world, daily-use performance I get from my 14700H? Should I continue using cheap and powerful Windows/Linux laptops in the future, or should I make the switch to a MacBook?

(Translated from my native language to English using Gemini.)

I don't know for sure, either, but I suspect any recent Macbook with 16GB RAM would be a significant upgrade over 14700H.

I don't like macOS, so in recent years, I only use it on laptop (which for me is like, a few on-site meetings per year, plus a few airplane flights). What infuriates me is that my mid-tier Mac laptop for those use cases is now significantly faster than any Linux workstation I can possibly buy... and positively annihilates any non-Apple laptop machine on essentially every meaningful benchmark.

I really hoped that Asahi Linux had progressed. I want to use Linux on apple hardware.

I love how few people mention ARM being used in the cloud, when it has literally saved folks so much money - not to mention the planet burns less quickly on ARM.

I was in nearly the same situation as you and went with the Framework 13 as well (albeit with the AMD Ryzen 5 7640U which is an older chip). Not really regretting it though despite some quirks. Out of curiosity, how much RAM do you have in your Framework 13?

No incentive. x86 users come to the table with a heatsink in one hand and a fan in the other, ready to consume some watts.

From a pure CPU and battery life perspective, the Snapdragon X Elite-based Surface Laptop 7 is really quite good - comparable to M2 Pro and M3 Pro in performance and performance per watt. The GPU is a bit weak.

The build quality of the Surface Laptop is superb also.

The M4 I have lets me run GPT-OSS-20B on my Mac, and it's surprisingly responsive. I was able to get LM Studio to even run a web API for it, which Zed detected. I'm pleasantly surprised by how powerful it is. My gaming PC with a 3080 cannot even run the same LLM model (not enough VRAM).
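For anyone curious about that setup: LM Studio's local server speaks an OpenAI-compatible API (on port 1234 by default), which is what editors like Zed can point at. A rough sketch, with the model identifier being whatever the LM Studio UI shows for your download:

  curl http://localhost:1234/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "openai/gpt-oss-20b", "messages": [{"role": "user", "content": "Hello"}]}'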

Backward compatibility.

Intel provides processors for many vendors and many OS. Changing to a new architecture is almost impossible to coordinate. Apple doesn't have this problem.

Actually, in the 90s Intel and Microsoft wanted to move to a RISC architecture, but Compaq forced them to stay on x86.

Apple: m68k -> PowerPC (32), OS 9 -> OS X, PowerPC (32, 64) -> x86 (32, 64) -> Arm. They've dragged giants like Adobe (kicking and screaming) through most stages.

Windows NT has always been portable, but didn't provide any serious compat with Windows 4.x until 5.0. At that time, AMD released their 64-bit extension to x86. Intel wanted to build their own, Microsoft went "haha no". By that time they've been dictating the CPU architecture.

I guess at that point there was very little reason to switch. Intel's Core happened; Apple even went to Intel to ask for a CPU for what would become the iPhone - but Intel wasn't interested.

Perhaps I'm oversimplifying, but I think it's complacency. Apple remained agile.

I was about to say it might be Windows and to suggest using Linux, since perf benchmarks on Windows can be far worse than on Linux for the same chip, but you are using Linux already.

Intel and AMD have to earn their investments back in one generation; Apple can earn their investments back over a customer's lifetime.

> using the Framework feels like using an older Intel based Mac

Your memory serves you wrong. The experience with Intel-based Macs was much worse than with recent AMD chips.

Agree. My 2017 MBP cooked its own battery (spicy pillow) by 2021.

My 2019 Thinkpad T495 (Ryzen 3600) does get hot under load, but it's still fine to type on.

Yep, but only because of Apple's terrible design. Take those same chips and put them in a machine with proper cooling and they fly. It's frustrating when Apple fans always blame that situation on Intel, when in reality Apple messed up the design badly. It's almost like they purposely designed the last generation of Intel Macs to run hot and throttle just so people had bad memories of them after upgrading to Apple Silicon.

It's not just the hardware efficiency, but it's also the software stack that's efficient. I'd be curious, macOS versus Linux for battery life testing.

I think it is getting close: [0]

(Edit, I read lower in the thread that the software platform also needs to know how to make efficient use of this performance per watt, ie, by not taking all the watts you can get.)

[0] https://www.phoronix.com/review/ryzen-ai-max-395-9950x-9950x...

I have an iPad pro (m1) and don't feel like upgrading at all. Of course, it's an overpowered chip for a tablet - but I'm still impressed by what I can run on it (like DrawThings).

You can probably install Asahi Linux on that M1 pro and do comparative benchmarks. Does it still feel different? (serious question)

I can build myself a new amd64 box for just under €200. Under €100 with used parts. Some older Dell and Lenovo laptops even work with coreboot.

An Airbook sets me back €1000, enough to buy a used car, and AFAICT is much more difficult to get fully working Linux on than my €200 amd64 build.

Why hasn't apple caught up?

When netbooks ($400 notebooks) were all the rage, Steve Jobs was asked why Apple didn’t make one. And he said they didn’t know how to make a cheap laptop that didn’t suck.

And he was right. Netbooks mostly sucked. Same with Chromebooks.

There’s nothing to be gained by racing to the bottom.

You can buy an m1 laptop for $599 at Walmart. That’s an amazing deal.

    > You can buy ... for $599
Not sure why you'd think any random nerd has that kind of money. And Walmart isn't exactly around the corner for most parts of the world.

if you're going to include used, you can get an M1 for as low as $300. https://www.backmarket.com/en-us/p/macbook-air-2020-13-inch-...

>I can build myself a new amd64 box for just under €200

Precisely because of that, they haven't caught up. They don't want to compete in the PC race to the bottom that nearly bankrupted them in the 90s, before they invented the iPod.

Apple got rich by creating its own markets.

> I haven’t tried Windows on the Framework yet it might be my Linux setup being inefficient.

My experience has been to the contrary. Moving to Linux a couple months ago from Windows doubled my battery life and killed almost all the fan noise.

Honestly, I have serious FOMO about this. I am never going to run a Mac (or worse: Windows); I'm 100% on Linux. But I seriously hate that I can't reliably work at a coffee shop for five hours. Not even doing that much other than some music, coding, and a few compiles of golang code.

My Apple friends get 12+ hrs of battery life. I really wish Lenovo+Fedora or whoever would get together and make that possible.

I have a 7.5 year old Asus Zenbook UX305CA. It was the perfect laptop for my use case, given I run all heavy stuff on remote servers. 3200x1800 HiDPI screen, 8GB RAM, no fan, rigid aluminium construction (so it feels high quality), and it runs Linux pretty reliably. It used to get at least 6-7 hours of doing actual work, and one night I forgot to hibernate it or plug it in, and it was still running the next morning.

Now, 7.5 years later, the battery is not so healthy any more, and I'm looking around for something similar, and finding nothing. I'm seriously considering just replacing the battery. I'll be stuck with only 8GB RAM and an ancient CPU, but it still looks like the best option.

Another useful thing is that you can buy small portable battery packs that are meant for jump-starting car engines, and they have a 12V output (probably more like 14V), which could quite possibly be piped straight into the DC input of a laptop. My laptop asks for 19V, but it could probably cope with this.

> work at a coffee shop

That doesn't sound super secure to me.

> for five hours.

My experience with anything that is not designed to be an office is that it will be uncomfortable in the long run. I can't see myself working for 5 hours in that kind of place.

Also it seems it is quite easily solved with an external battery pack. It may not last 12 hours, but it should last 4 to 6 hours without a charge in power-saving mode.

Despite OP's complaints (which are valid) I run Fedora on my Framework 13 (AMD) and I get 5 hours of work (10 ish Firefox tabs, multiple VS Code instances, terminals and Slack) without issue.

It's not 8-12, and the fans do kick up. The track pad is fine but not as nice as the one on the MacBook. But I prefer to run Linux so the tradeoff is worth it to me.

> I'm 100% on Linux, but I seriously hate it that I can't reliably work at a coffee shop for five hours. Not even doing that much other than some music, coding, and a few compiles of golang code.

Don't you drink any coffee in the coffee shop? I hope you do. But, still, being there for /five/ hours is excessive.

> I am never going to run a Mac (or worse: Windows) I'm 100% on Linux,

I'm guessing you're well aware, but just in case you're not: Asahi Linux is working extremely well on M1/M2 devices and easily covers your "5 hours of work at a coffee shop" use case.

Try one of the newer AMD or Intel (TSMC-made) CPUs. It's pretty much the same. Keep in mind the battery size too: the MBP has a huge and very heavy battery (the MBP is super heavy).

HP has Ubuntu-certified strix halo machines for example.

> I seriously hate it that I can't reliably work at a coffee shop for five hours

just... take your charger...

They’re relatively heavy, take up space and there’s no guarantee there will be an outlet near your table. When connected, the laptop becomes more difficult to move or pack. It’s all doable but also slightly less convenient.

Thanks for the honest review! I have two Intel ThinkPads (2018 and 2020) and I've been eying the Framework laptops for a few years as a potential replacement. It seems they do keep getting better, but I might just wait another year. When will x86 have the "alien technology from the future" moment that M1 users had years ago already?

In general, probably co-design with software. Apple is in a position where they design microprocessors that are only going to be running MacOS/iOS.

Macbooks are more like "phone/tablet hardware evolved into desktop" mindset (low power, high performance). x86 hardware is the other way around (high power, we'll see about performance).

That being said, my M2 beats the ... out of my twice-as-expensive work laptop when compiling an Arduino project. Literal jaw drop the first time I compiled on the M2.

I don't think a fan spinning is negative. The cooling is functioning effectively.

Apple often lets the device throttle before it turns on the fans for "better UX"; Linux plays no such mind games.

The way the notebooks are built allows for passive cooling, the fans are actually quieter, and the CPUs run cooler at the same workloads, as shown by Cinebench-per-watt tests. It's not just one simple thing.

x86 has long been the industry standard and can't be removed, but Apple could move away from it because they control both hardware and software.

Software.

If you actually benchmark said chips in a computational workload I'd imagine the newer chip should handily beat the old M1.

I find both windows and Linux have questionable power management by default.

On top of that, snappiness/responsiveness has very little to do with the processor and everything to do with the software sitting on top of it.

Cinebench points per Watt according to a recent c't CPU comparison [1]:

  Apple M1: 23.3
  Apple M4: 28.8
  Ryzen 9 7950X3D (from 2023, best x86): 10.6
All other x86 were less efficient.

The Apple CPUs also beat most of the respective same-year x86 CPUs in Cinebench single-thread performance.

[1] https://www.heise.de/tests/Ueber-50-Desktop-CPUs-im-Performa... (paywalled, an older version is at https://www.heise.de/select/ct/2023/14/2307513222218136903#&...)

Does the M series have a flat memory model? If so, I believe that may be the difference. I'm pretty sure the entire x86 family still pages RAM access which (at least) quadruples activity on the various busses and thus generates far more heat and uses more energy.

I'm not aware of any CPU invented since the late eighties that doesn't have paged virtual memory. Am I misunderstanding what you mean? Can you expand on where you are getting the 4x number from?

I doubt any CPU has more levels of address translation, caching, and other layers of memory access indirection than AMD/Intel 64 at this point.

That's an interesting question about the number of levels of address translation. Does anyone have numbers for that, and how much latency and energy an extra layer costs?

How much do you like the rest of the hardware? What price would seem OK for decent GUI software that runs for a long time on battery?

I'm learning x86 in order to build nice software for the Framework 12 with the i3-1315U (Raptor Lake). Going into the optimization manuals for Intel's E-cores (apparently Atom) and AMD's 5c cores. The efficiency cores on the M1 MacBook Pro are awesome. Getting Debian or Ubuntu with KDE to run like this on a FW12 will be mind-boggling.

I always thought it was Apple's on-package DRAM latency that contributes to its speed relative to x86, especially for local LLM (generative, not necessarily training) usage, but with the answers here I'm not so sure.

> a number of Dockers containers running simultaneously and I never hear the fans, battery life has taken a bit of a hit but it is still very respectable.

Note those docker containers are running in a linux VM!

Of course they are on Windows (WSL2) as well.

Docker has got to be one of the worst energy consumption offenders, given that for most developers it's running in a heavy VM under a non-Linux OS while people think it's lightweight. It doesn't help on the high-performance side either, esp. ML. Might just be the wrong abstraction, driven by "cloud vendors" for conveniently (for them!) farming overcommitted servers with ill-partitioned, mostly-idle vibe "microservices."
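An easy way to see that VM from a macOS or Windows host, for what it's worth (the exact strings depend on Docker Desktop vs. colima etc.):

  docker info --format '{{.OperatingSystem}} / kernel {{.KernelVersion}}'
  # on Docker Desktop for Mac this reports a Linux distro with a linuxkit kernel,
  # i.e. every container is really running inside that hidden VM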

They are pretty similar when comparing the latest AMD and Apple chips on the same node. Apple's buying power means they get the node earlier than AMD, usually by 6-9 months.

Windows on the other hand is horribly optimized, not only for performance, but also for battery life. You see some better results from Linux, but again it takes a while for all of the optimizations to trickle down.

The tight optimization between the chip, operating system, and targeted compilation all come together to make a tightly integrated product. However comparing raw compute, and efficiency, the AMD products tend to match the capacity of any given node.

In my opinion AMD is on a good path to at least comparable performance to MacBooks by copying Apple's architectural decisions. Unfortunately their jump on the latest AI hype train did not suit them well for efficiency: the Ryzen 7840U was significantly more efficient than the Ryzen AI 7 350 [1].

However, with AMD Strix Halo aka the AMD Ryzen AI Max+ 395 (PRO), there are notebooks like the ZBook Ultra G1a and tablets like the Asus ROG Flow Z13 that come close to the MacBook power/performance ratio[2], due to the fact that they use high-bandwidth soldered-on memory, which allows for GPUs with shared VRAM similar to Apple's strategy.

Framework did not manage to put this thing in a notebook yet, but shipped a desktop variant. They also pointed out that there was no way to use LPCAMM2 or any other modular RAM tech with that machine, because it would have slowed it down / increased latencies to an unusable state.

So I'm pretty sure the main reason for Apple's success is the deeply integrated architecture, and I'm hopeful that AMD's next-generation Strix Halo APUs might provide this with higher efficiency, and that Framework adopts these chips in their notebooks. Maybe they just did in the 16?! Let's wait for this announcement: https://www.youtube.com/watch?v=OZRG7Og61mw

Regarding the deeply-thought-through integration, there is a story I often tell: Apple used to make iPods. These had support for audio playback control with their headphone remotes (e.g. EarPods), which are still available today. These had a proprietary ultrasonic chirp protocol[3] to identify Apple devices and supported volume control and complex playback control actions. You could even navigate through menus via VoiceOver by long-pressing and then using the volume buttons to navigate. To this day, with their USB-C-to-AudioJack adapters, these still work on nearly every Apple device released after 2013, and the wireless earbuds also support parts of this. Android has tried to copy this tiny little engineering wonder, but to this day they have not managed to get it working[4]. They instead focus on their proprietary "long-press should work in our favour and start Hey Google" thing, which is ridiculously hard to intercept / override in officially published Android apps... what a shame ;)

1: https://youtu.be/51W0eq7-xrY?t=773

2: https://youtu.be/oyrAur5yYrA

3: https://tinymicros.com/wiki/Apple_iPod_Remote_Protocol

4: https://github.com/androidx/media/issues/2637

AMD’s Strix Halo is still significantly behind the M4 in performance and efficiency. Not even close.

There is an M-series competitor from Intel that was released last year, codename Lunar Lake.

Here's a video about it. Skip to 4:55 for battery life benchmarks. https://www.youtube.com/watch?v=ymoiWv9BF7Q

Chrome has been very conservative about enabling hardware acceleration features on Linux. Look under about://gpu to see a list. It is possible to force them via command line flags. That said, this is only part of the story.

There are different kinds of transistors that can be used when making chips: slow but efficient transistors, and fast but leaky ones. Getting an efficient design is a balancing act where you limit use of the fast transistors to only the most performance-critical areas. AMD historically used these high-performance leaky transistors more liberally, which enabled it to reach some of the highest clock frequencies in the industry. Apple, on the other hand, designed for power efficiency first, so its use of such transistors was far more conservative. Rather than use faster transistors, Apple would restrict itself to the slower ones but use more of them, resulting in wider core designs with higher IPC that matched the performance of some of the best AMD designs while using less power. AMD recently adopted some of Apple’s restraint when designing the Zen 5c variant of its architecture, but that is still a modification of a design built around liberal use of leaky transistors for high clock speeds:

https://www.tomshardware.com/pc-components/cpus/amd-dishes-m...

The resulting clock speeds of the M4 and the Ryzen AI 340 are surprisingly similar, with the M4 at 4.4GHz and the Ryzen AI 340 at 4.8GHz. That said, the same chip is used in the Ryzen AI 350 that reaches 5.0GHz.

There is also the memory used. Apple uses LPDDR5X on the M4, which runs at lower voltages and has tweaks that sacrifice some latency for a big savings in power. It is also soldered on/close to the CPU/SoC, which reduces the power needed to move data to/from the CPU. AMD uses either LPDDR5X or DDR5. I have not kept track of the exact difference in power usage between DDR versions and their LP variants, but I expect the LP variants to use half the power, if not less. Memory in many machines can use 5W or more just at idle, so cutting memory power usage can make a big impact.

Additionally, x86 has a decode penalty compared to other architectures. It is often stated that this is negligible, but those statements began during the P4 era, when a single core used ~100W and a ~1W power draw for the decoder really was negligible. Fast forward to today, where x86 is more complex than ever and people want cores to use 1W or less, and the decode penalty becomes far more relevant. ARM, with fixed-length instructions and a fraction of the instruction count, uses less power to decode, since its decoder is simpler. To those who feel compelled to reply with the mantra that this is negligible: please reread what I wrote about it being negligible when cores used 100W each, and how the instruction set is more complex now. Say the instruction decoder uses 250mW for x86 and 50mW for ARM. That 200mW difference is not negligible when you want sub-1W core power usage: it is at least 20% of the power available to the core. It does become negligible when your cores each draw 10W, like in AMD’s desktops.

Apple also took the design choice of designing its own NAND flash controller and integrating it into its SoC, which provides further power savings by eliminating some of the overhead associated with an external NAND flash controller. Being integrated into the SoC also means there is no need to spend power driving signals very far, unlike more standard designs that have to assume a long run across a PCB.

Finally, Apple implemented an innovation for timer coalescing in Mavericks that made a fairly big impact:

https://www.imore.com/mavericks-preview-timer-coalescing

On Linux, coalescing is achieved by adding a default 50µs of slack to traditional Unix timers. This can be changed per thread, but I have never seen anyone actually do that:

https://man7.org/linux/man-pages/man2/pr_set_timerslack.2con...
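To make it concrete, here is a minimal sketch of what opting into a bigger slack looks like from C. The 500ms slack and the 5-second polling loop are made-up illustration values of my own, not anything recommended by the man page:

  #include <stdio.h>
  #include <sys/prctl.h>
  #include <time.h>

  int main(void)
  {
      /* The default per-thread timer slack is 50us; widen it to 500ms so the
         kernel is free to coalesce our wakeups with other pending timers. */
      if (prctl(PR_SET_TIMERSLACK, 500UL * 1000 * 1000, 0, 0, 0) != 0)
          perror("prctl(PR_SET_TIMERSLACK)");

      long slack_ns = prctl(PR_GET_TIMERSLACK, 0, 0, 0, 0);
      printf("current timer slack: %ld ns\n", slack_ns);

      for (;;) {
          /* nanosleep/poll/epoll timeouts inherit the slack, so this 5s
             sleep may legally fire up to ~500ms late. */
          struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };
          nanosleep(&ts, NULL);
          /* ... periodic, delay-tolerant background work ... */
      }
  }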

That was done to retroactively support coalescing in UNIX/Linux APIs that did not support it (which were all of them). However, Apple made its own new API for event handling, Grand Central Dispatch, that exposes coalescing in a very obvious way via the leeway parameter while leaving the UNIX/BSD APIs untouched, and this is now the preferred way of doing event handling on macOS:

https://developer.apple.com/documentation/dispatch/1385606-d...
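As a rough sketch of what that looks like (my own example, not Apple's; the 10-second interval and 5-second leeway are arbitrary values), a delay-tolerant background job using the C libdispatch API might do:

  #include <dispatch/dispatch.h>
  #include <stdio.h>

  int main(void)
  {
      dispatch_queue_t q =
          dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
      dispatch_source_t timer =
          dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, q);

      /* Fire every 10s, and tell the system up front that we tolerate up
         to 5s of leeway, so the wakeup can be coalesced with other timers. */
      dispatch_source_set_timer(timer,
                                dispatch_time(DISPATCH_TIME_NOW, 0),
                                10ull * NSEC_PER_SEC,  /* interval */
                                5ull  * NSEC_PER_SEC); /* leeway   */

      dispatch_source_set_event_handler(timer, ^{
          printf("periodic, delay-tolerant background work\n");
      });

      dispatch_resume(timer);
      dispatch_main(); /* park the main thread and service dispatch events */
  }

The point being: the leeway argument sits right there in the one call every timer user has to make, rather than behind a process-wide knob the developer has to go looking for.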

Thus, a developer of a background service on macOS that can tolerate long delays could easily set the leeway to multiple seconds, which would essentially guarantee it would be coalesced with some other timer, while a developer of a similar service on Linux could, but probably will not, since the timer slack is something the developer would need to go out of his way to modify, rather than something in his face like the leeway parameter in Apple’s API.

I did check how this works on Windows. Windows supports a similar per-timer tolerance via SetCoalescableTimer() (sketched below), but the developer would need to opt in by using it in place of SetTimer(), and it is not clear there is much incentive to do so.

To circle back to Chrome: it uses libevent, which uses the BSD kqueue on macOS. As far as I know, kqueue does not take advantage of timer coalescing on macOS, so the Mavericks changes would not benefit Chrome very much, and the improvements that do benefit Chrome are elsewhere. However, I thought the timer coalescing stuff was worthwhile to mention given that it applies to many other things on macOS.
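For completeness, here is the Windows sketch referred to above; the only difference from a plain SetTimer() call is the extra tolerance argument (the 2-second period and 500ms tolerance are arbitrary illustration values of mine):

  #define _WIN32_WINNT 0x0602 /* SetCoalescableTimer() needs Windows 8+ headers */
  #include <windows.h>
  #include <stdio.h>

  static void CALLBACK tick(HWND hwnd, UINT msg, UINT_PTR id, DWORD time)
  {
      (void)hwnd; (void)msg; (void)id; (void)time;
      printf("periodic, delay-tolerant background work\n");
  }

  int main(void)
  {
      /* Like SetTimer(), but the last argument allows the scheduler to delay
         the callback by up to 500ms to coalesce it with other timers. */
      if (SetCoalescableTimer(NULL, 0, 2000, tick, 500) == 0) {
          fprintf(stderr, "SetCoalescableTimer failed: %lu\n", GetLastError());
          return 1;
      }

      /* Timer callbacks are delivered via the thread's message loop. */
      MSG msg;
      while (GetMessage(&msg, NULL, 0, 0) > 0) {
          TranslateMessage(&msg);
          DispatchMessage(&msg);
      }
      return 0;
  }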

I think the Ryzen AI Max+ 395 gets really close in terms of performance per watt.

It isn't.

https://imgur.com/a/yvpEpKF

In single threaded CPU performance, M4 Pro is roughly 3.6x more efficient while also being 50% faster.

Then the M5 is going to be even more of a beast.

s/x84/x86/

>s/x84/x86/

TIL:

https://en.wikipedia.org/wiki/Monopole_(company)#Racing_cars

I was kind of hoping that there was some little-known x84 standard that never saw the light of day, but instead all I found was classic French racing cars.

> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

Is that your metric of performance? If so...

  $ sudo cpufreq-set -u 50MHz
done!

poverty

Hardware performance literally doesn't matter if your software doesn't use it. The more SoC-like design of the M series essentially gives performance engineers an easier time. x86 vendors are fighting a losing battle until they change their image of what an x86-based computer should look like. You aren't going to beat Apple's insiders; x86 vendors have a market opportunity here, but they've had it for two decades at this point and have refused to switch, so they are likely incapable and will die. Sad.

Apple designs their laptops to throttle power when they warm up too much. Framework gives theirs a fan.

It's a design choice.

Also, different Linux distros/DEs prioritize different things. Generally they prioritize performance over battery life.

That being said, I find Debian GNOME to be the best on battery life. I get 6 hours on an MSI laptop that has an 11th gen Intel processor and a battery with only 70% capacity left. It also stays cool most of the time (except gaming while being plugged in) but it does have a fan...

M1’s efficiency/thermals performance comes from having hardware-accelerated core system libraries.

Imagine that you made an FPGA do x86 work, and then you wanted to optimize libopenssl, or libgl, or libc. Would you restrict yourself to only modifying the source code of the libraries but not the FPGA, or would you modify the processor to take advantage of new capabilities?

For made-up example, when the iPhone 27 comes out, it won’t support booting on iOS 26 or earlier, because the drivers necessary to light it up aren’t yet published; and, similarly, it can have 3% less battery weight because they optimized the display controller to DMA more efficiently through changes to its M6 processor and the XNU/Darwin 26 DisplayController dylib.

Neither Linux, Windows, nor Intel have shown any capability to plan and execute such a strategy outside of video codecs and network I/O cards. GPU hardware acceleration is tightly controlled and defended by AMD and Nvidia who want nothing to do with any shared strategy, and neither Microsoft nor Linux generally have shown any interest whatsoever in hardware-accelerating the core system to date — though one could theorize that the Xbox is exempt from that, especially given the Proton chip.

I imagine Valve will eventually do this, most likely working with AMD to get custom silicon that implements custom hardware accelerations inside the Linux kernel that are both open source for anyone to use, and utterly useless since their correct operation hinges on custom silicon. I suspect Microsoft, Nintendo, and Sony already do this with their gaming consoles, but I can’t offer any certainty on this paragraph of speculation.

x86 isn’t able to keep up because x86 isn’t updated annually across software and hardware alike. M1 is what x86 could have been if it were versioned and updated, without backwards compatibility, as often as Arm was. It would be like saying “Intel’s 2026 processors all ship with AVX-1024 and hardware-accelerated DMA, and the OS kernel (and apps that want the full performance gains) must be compiled for its new ABI to boot on it”. The wreckage across the x86 ecosystem would be immense, and Microsoft would boycott them outright to try and protect itself from having to work harder to keep up, just like Adobe did with the Apple M1, at least until their userbase started canceling subscriptions en masse.

That’s why there are so many Arm Linux architectures: for Arm, this is just a fact of everyday life, and that’s what gave the M1 such a leg up over x86. Not having to support anything older than your release date means you can focus on the sort of boring incremental optimizations that wouldn’t be permissible in the “must run assembly code written twenty years ago” environment assumed by Lin/Win today.

This isn't really true. Linux doesn't use any magic accelerators yet it runs very fast on Apple Silicon. It's just the best processor.

P/E cores do benefit from software tuning, but aside from that it's almost all hardware.

The GPU is significantly different from other desktop GPUs but it's in principle like other mobile GPUs, so not sure how much better Linux could be adapted there.

iOS 26 comes out this year.

macOS releases still work fine on Intel-based Macs.

On my Ryzen laptop, I have to manually ensure that Linux is setting the right power settings. Once I do that, my 5950HS laptop from 2022 is completely competitive with my work MacBook M2. Louder and hotter at full tilt, but it also has a better GPU (even with the onboard Nvidia turned off), and I can get ~6 hours of web dev out of it if I'm not constantly churning tons of files.

I would try it with Windows for a better comparison, or get into the weeds of getting Linux to handle the Ryzen platform power settings better.

With Ubuntu properly managing fans and temps and clocks, I'll take it over the Mac 10/10 times.

>My daily workhorse is a M1 Pro that I purchased on release date, It has been one of the best tech purchases I have made

Same. I just realized it's three years old; I've used it every day for hours and it still feels like the first day I got it.

They truly redeemed themselves with these, as their laptops had been getting worse and worse and worse (keyboard fiasco, Touch Bar, ...).

My M1 Macbook Pro I used at work for several months until the Ubuntu Ryzen 7 7840U P14s w/32GB RAM arrived didn't seem particularly amazing.

The only real annoying thing I've found with the P14s is the Crowdstrike junk killing battery life when it pins several cores at 100% for an hour. That never happened in MacOS. These are corporate managed devices I have no say in, and the Ubuntu flavor of the corporate malware is obviously far worse implemented in terms of efficiency and impact on battery life.

I recently built myself a 7970X Threadripper and it's quite good perf/$ even for a Threadripper. If you build a gaming-oriented 16c ryzen the perf/$ is ridiculously good.

No personal experience here with Frameworks, but I'm pretty sure Jon Blow had a modern Framework laptop he was ranting a bunch about on his coding live streams. I don't have the impression that Framework should be held as the optimal performing x86 laptop vendor.

> That never happened in MacOS

Oh you've gotten lucky then. Or somehow disabled crowdstrike.

That could be because crowdstrike is not inside the XNU kernel anymore:

https://www.crowdstrike.com/en-us/blog/crowdstrike-supports-...

They happily implement a userland version on macOS, but then claimed that being in the kernel is absolutely necessary on Windows after they disabled all Windows machines using it.

> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

I've got the Framework 13 with the Ryzen 5 7640U and I routinely have dozens of tabs open, including YouTube videos, docker containers, handful of Neovim instances with LSPs and fans or it getting hot have never been a problem (except when I max out the CPU with heavy compilation).

The issue you're seeing isn't because x86 is lacking but something else in your setup.

I don't know, but I suspect the builds of the programs you're using play a huge factor in this. Depending on the Linux distro and package management you're using, you just might not be getting programs that are compiled with the latest x86_64 optimizations.

One was built from the ground up more recently than the other.

Looking beyond Apple/Intel, AMD recently came out with a CPU that shares memory between the GPU and CPU, like the M processors.

The Framework is a great laptop - I'd love to drop a mac motherboard into something like that.

Most probably because it is not impacting Microsoft sales?

All Ryzen mobile chips (so far) use a homogeneous core layout. If heat/power consumption is your concern, AMD simply hasn't caught up to the big.LITTLE-style architecture Intel and Apple use.

In terms of performance though, those N4P Ryzen chips have knocked it out of the park for my use-cases. It's a great architecture for desktop/datacenter applications, still.

Sort of. Technically the Ryzen 5 AI 340 has 3 Zen 5 cores and 3 Zen 5c cores. They are more similar to each other than Apple's or Intel's performance/efficiency cores are, but the 5c cores are still more power efficient.

There is one positive to all of this. Finally, we can stop listening to people who keep saying that Apple Silicon is ahead of everyone else because they have access to a better process. There are now chips on better processes than the M1 that still deliver much worse performance per watt.

Go down the rabbit hole of broken compiler settings for Debian default builds, if you want to see how much low-hanging fruit we still have.

Who here would be interested in testing a distro like debian with builds optimized for the Framework devices?

Should .. should I install gentoo?

The answer is always yes, continuously.

Because of a random anecdote on Hacker News?

Not sure why you'd think that, comparing a heterogeneous core architecture to a homogeneous one. Mobile Ryzen chips aren't designed for power efficiency; if you want a "fair" comparison, pull up a big.LITTLE x86 chip or benchmark Apple's performance cores against AMD's mobile chipsets.

Once you normalize for either efficiency cores or performance cores, you'll quickly realize that the node lead is the largest advantage Apple had. Those guys were right, the writing was on the wall in 2019.

I guess that’s the new excuse. Except it doesn’t work. I can off-line all the efficiency cores on my M1 laptop and still run circles around the new x86 stuff in performance per watt.

Well don't just tell me about it, show me. Link the Geekbench results when it's done running.

> I am sorely disappointed, using the Framework feels like using an older Intel based Mac. If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

A big thing is storage. Apple uses extremely fast storage directly attached to the SoC and physically very very close. In contrast, most x86 systems use storage that's socketed (which adds physical signal runtime) and that goes via another chip (southbridge). That means, unlike Mac devices that can use storage as swap without much practical impact, x86 devices have a serious performance penalty.

Another part of the issue when it comes to cooling is that Apple is virtually the only laptop manufacturer that makes solid full aluminium frames, whereas most x86 laptops are made out of plastic and, for higher-end ones, magnesium alloy. That gives Apple the advantage of being able to use the entire frame to cool the laptop, allowing far more thermal input before saturation occurs and the fans have to activate.

> A big thing is storage. Apple uses extremely fast storage directly attached to the SoC and physically very very close. In contrast, most x86 systems use storage that's socketed (which adds physical signal runtime) and that goes via another chip (southbridge).

Why would PCIe SSDs need to go through a southbridge? The CPU itself provides PCIe lanes that can be used directly.

> That means, unlike Mac devices that can use storage as swap without much practical impact, x86 devices have a serious performance penalty.

Swap is slow on all hardware. No SSD comes close to the speed of RAM - not even Apple's. Latency is also significantly worse when you trigger a page fault and then need to wait for the page to load from disk before the thread can resume execution.

> The CPU itself provides PCIe lanes that can be used directly.

It does, but if you look at the mainboard manuals of computers, usually it's 32 lanes of which 16 go to the GPU slot and 16 to the southbridge, so no storage directly attached to the CPU. Laptops are just as bad.

Intel has always done price segmentation with the number of PCIe lanes exposed to the world.

Threadripper AMD CPUs are a different game, but I'm not aware of anyone, even "gamer" laptops, sticking such a beast into a portable device.

> Latency is also significantly worse when you trigger a page fault and then need to wait for the page to load from disk before the thread can resume execution.

Indeed, but the difference in performance between an 8GB Windows laptop and an 8GB M-series Apple laptop is noticeable, even if all it's running is the base OS and Chrome with a few dozen tabs.

> It does, but if you look at the mainboard manuals of computers, usually it's 32 lanes of which 16 go to the GPU slot and 16 to the southbridge, so no storage directly attached to the CPU. Laptops are just as bad.

Why would the southbridge need a whole 16 lanes? That's 32 GB/s of bandwidth (or 64, if PCIe 5). My (AMD) motherboard has the GPU and two M.2 sockets connected directly to the CPU and it's one of the cheaper ones. No idea about my laptop but I expect it to be similar because it's also AMD. Intel is obviously different here because they're more stingy with PCIe lanes.

There should be no reason for a laptop with only an integrated GPU to dangle storage off the southbridge. They take at most 4 lanes and can work with less.

> Indeed, but the difference in performance between an 8GB Windows laptop and an 8GB M-series Apple laptop is noticeable, even if all it's running is the base OS and Chrome with a few dozen tabs.

Any Windows laptop that comes with 8GB of RAM is going to have a crappy SSD included because those are always built to be cheap, not performant. It could even be a SATA SSD (500MB/s bandwidth max). Most likely they'd come with a processor significantly slower and a decent chance the RAM would also be single channel, too.

> It does, but if you look at the mainboard manuals of computers, usually it's 32 lanes of which 16 go to the GPU slot and 16 to the southbridge, so no storage directly attached to the CPU.

AFAIK that's not the case at least on AMD (not Threadripper, but the mainstream AM5 socket). They have 28 lanes of which 16 go to the GPU slot, 4 go to the southbridge, 4 are dedicated to M.2 NVMe storage, and 4 go to either another PCIe slot or another M.2 NVMe storage. See for a random example this motherboard manual https://download.asrock.com/Manual/B650M-HDVM.2.pdf which has a block diagram on page 8 (page 12 of the PDF).

They haven’t beaten the low morale out of their workforce yet.

RISC vs CISC. Why do you think a mainframe is so fast?

ARM is great. Those M-series machines are the only thing I could buy used and put Linux on.

> RISC vs CISC. Why do you think a mainframe is so fast?

This hasn't been true for decades. Mainframes are fast because they have proprietary architectures that are purpose-built for high throughput and redundancy, not because they're RISC. The pre-eminent mainframe architecture these days (z/Architecture) is categorized as CISC.

Processors are insanely complicated these days. Branch prediction, instruction decoding, micro-ops, reordering, speculative execution, cache tiering strategies... I could go on and on but you get the idea. It's no longer as obvious as "RISC -> orthogonal addressing and short instructions -> speed".

> The pre-eminent mainframe architecture these days (z/Architecture) is categorized as CISC.

Very much so. It's largely a register-memory (and indeed memory-memory) rather than load-store architecture, and a direct descendant of the System/360 from 1964.

Everything is RISC after it gets decoded. It isn’t 1990 anymore. The decoder costs maybe 1% performance.

In Haswell, 4.8W of the 22.1W used by the core went to the decoder for integer/ALU instructions [0]. According to this analysis [1] of the entire Ubuntu repository, 89% of all instructions were composed of just 12 instructions (all integer/ALU).

From this we can infer that for most normal workloads, almost 22% of the Haswell core power was used in the decoder. As decoders have gotten wider and more complex in recent designs, I see no reason why this wouldn't be just as true for today's CPUs.

[0] https://www.usenix.org/system/files/conference/cooldc16/cool...

[1] https://oscarlab.github.io/papers/instrpop-systor19.pdf

I thought people stopped believing this around 2005 when Apple users finally had to admit that PPC was behind x86.

Even though this was the case for the most part during the entire history of PPC Macs (I owned two during these years)

https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter

Chips and Cheese makes some bad arguments in that article.

Their claim that ARM decoders are just as complex wasn't true then and is even less true now. ARM reduced decoder size 75% from A710 to A715 by dropping legacy 32-bit stuff. Considering that x86 is way more complex than 32-bit ARM, the difference between an x86 and ARM decoder implementation is absolutely massive.

They abuse the decoder power paper (and that paper also draws a conclusion its own data doesn't support). The data shows that some 22% of total core power is used by the decoder for integer/ALU workloads. As 89% of all instructions in the entire Ubuntu repos are just 12 integer/ALU instructions, we can infer that the power cost of the decoder is significant (I'd consider nearly a quarter of the total power budget significant in any case).

The x86 decoder situation has gotten worse with Golden Cove (with 6 decoders) being infamous for its power draw and AMD fearing power draw so much that they opted for a super-complex dual 4-wide decoder setup. If the decoder power didn't matter, they'd be doing 10-wide decoders like the ARM designers.

The claim that ARM uses uops too is somewhere between a red herring and false equivalency. ARM uops are certainly less complex to create (otherwise they'd have kept around the uop cache) and ARM instructions being inherently less complex means that uop encoding is also going to be more simple for a given uarch compared to x86.

They then have an argument that proves too much when they say ARM has bloat too. If bloat doesn't matter, why did ARM make an entirely new ISA that ditches backward compatibility? Why take any risk to their ecosystem if there's no reward?

They also skip over the fact that objectively bad design exists. NOBODY out there defends branch delay slots. They are universally considered an active impediment to high-performance designs with ISAs like MIPS going so far as to create duplicate instructions without branch delay slots in order to speed things up. You can't argue that ISA definitely matters here, but also argue that ISA never makes any difference at all.

The "all ISAs get bloated over time" is sheer ignorance. x86 has roots going back to the early 1970s before we'd figured out computing. All the basics of CPU design are now stable and haven't really changed in 30+ years. x86 has x87 which has 80-bits because IEEE 754 didn't exist yet. Modern ISAs aren't repeating that mistake. x86 having 8 registers isn't a mistake they are going to make. Neither is 15 different 128-bit SIMD extensions or any of the many other bloated mess-ups x86 has made over the last 50+ years. There may be mistakes, but they are almost certainly going to be on fringe things. In the meantime, the core instructions will continue to be superior to x86 forever.

They also fail to address implementation complexity. Some of the weirdness of x86 like tighter memory timing gets dragged through the entire system complicating things. If this results in just 10% higher cost and 10% longer development time, that means a RISC company could develop a chip for $5.4B over 4.5 years instead of $6B over 5 years which represents a massive savings and a much lower opportunity cost while giving a compounding head-start on their x86 competitor that can be used to either hit the market sooner or make even larger performance jumps each generation.

Finally, optimizing something like RISC-V code is inherently easier/faster than optimizing x86 code because there is less weirdness to work around. RISC-V basically just has one way to do something and it'll always be optimized while x86 often has different ways to do the same thing and each has different tradeoffs that make sense in various scenarios.

As to PPC, Apple didn't sell enough laptops to pay for Motorola to put enough money into the designs to stay competitive.

Today, Apple MacBooks + phones move nearly 220M chips per year. For comparison, total laptop sales last year were around 260M. If Apple had Motorola make a chip today, Motorola would have the money to build a PPC chip that could compete with and surpass what x86 offers.

Fair enough.

And don’t forget that Apple can do things like completely remove all of the hardware that supports 32 bit code and tell developers to just deal with it.

It especially doesn't matter because the latest x86 update adds a mode that turns it into ARM.

https://www.intel.com/content/www/us/en/developer/articles/t...

RISC lost its meaning once SPARC added an integer multiply instruction.

At least my G5 helped keep my room warm in the winter.

It's fun watching things swing back and forth over time. I remember having those Sun mini-fridge-sized servers, all running SPARC-based RISC CPUs if I remember correctly. I wonder if there would be some merit in RISC-based Linux servers; maybe the power consumption is lower? I forget the pros/cons of RISC vs CISC CPUs.

To me it simply looks like Apple buys out the first year of every new TSMC node, and that is the main reason why the M series is more efficient. Strix Halo (N4P) has, according to Wikipedia, a transistor density of about 140 MTr/mm2, while the M4 (N3E) has about 210 MTr/mm2. Isn't the process node alone enough to explain the difference? (+ software optimizations in macOS, of course)

> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

Change TDP, TDC, etc. and fan curves if you don't like the thermal behavior. Your Ryzen has low enough power draw that you could even just cool it passively. It has a lower power draw ceiling than your M1 Pro while exceeding it in raw performance.

Also comparing chips based on transistor density is mostly pointless if you don't also mention die size (or cost).

I wonder what the difference in efficiency is between the MacBook display and the Framework laptop's. While the CPU and GPU can draw considerable power, they aren't usually running at 100% utilization. The display, however, has to draw power all the time, possibly at high brightness in daytime. MacBooks (all of them?) have high-resolution displays, which should be much more power-hungry than the Framework 13's IPS panel. Pro models use mini-LED, which needs even more power.

I did ask an LLM for some stats about this. According to Claude Sonnet 4 through VS Code (for what that's worth), my MacBook's display can consume the same or even more power than the CPU does for "office work". Yet my M1 Max 16" seems to last a good while longer than whatever it was I got from work this year. I'd like to know how those stats were produced (or whether they are hallucinated...). There doesn't seem to be a way to get the display's power usage on M-series Macs, so you'd need to devise a testing regime with the display off vs. at 100% brightness to get some indication of its effect on power use.