Battery efficiency comes from a million little optimizations in the technology stack, most of which come down to using the CPU as little as possible. As such, the instruction set architecture and process node aren't usually that important when it comes to your battery life.

If you fully load the CPU and calculate how much energy an AI340 needs to perform a fixed workload, and compare that to an M1, you'll probably find similar results. But that only matters for your battery life if you're doing things like Blender renders, big compiles, or gaming.

Take for example this battery life gaming benchmark for an M1 Air: https://www.youtube.com/watch?v=jYSMfRKsmOU. 2.5 hours is about what you'd expect from an x86 laptop, possibly even worse than the fw13 you're comparing here. But turn down the settings so that the M1 CPU and GPU are mostly idle, and bam you get 10+ hours.

Another example would be a ~5 year old mobile Qualcomm chip. It's on a worse process node than an AMD AI340, much, much slower, with significantly worse performance per watt, and yet it barely gets hot and sips power.

All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I've had to enable gpu video decoding on my fw16 and haven't noticed the fans on youtube.
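
If you want to check whether the driver side is even advertising hardware decode before poking at browser settings, vainfo (from libva-utils) will tell you. Here's a minimal C sketch of the same query, assuming libva/VA-API and a DRM render node at /dev/dri/renderD128 (the path can differ per machine); build with: cc vacheck.c -lva -lva-drm

    /* Minimal sketch: ask the VA-API driver whether it exists and how many
       codec profiles it advertises - roughly what vainfo reports. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <va/va.h>
    #include <va/va_drm.h>

    int main(void) {
        int fd = open("/dev/dri/renderD128", O_RDWR);
        if (fd < 0) { perror("open render node"); return 1; }

        VADisplay dpy = vaGetDisplayDRM(fd);
        int major = 0, minor = 0;
        if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS) {
            fprintf(stderr, "no usable VA-API driver -> browsers fall back to CPU decode\n");
            return 1;
        }
        printf("VA-API %d.%d, driver: %s\n", major, minor, vaQueryVendorString(dpy));

        int num = vaMaxNumProfiles(dpy);
        VAProfile *profiles = malloc((size_t)num * sizeof(*profiles));
        if (profiles && vaQueryConfigProfiles(dpy, profiles, &num) == VA_STATUS_SUCCESS)
            printf("%d codec profiles advertised by the driver\n", num);

        free(profiles);
        vaTerminate(dpy);
        close(fd);
        return 0;
    }

If that fails or reports zero profiles, no amount of browser settings will get you hardware decode; the driver or codec bits are missing at the system level.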

A huge reason for the low power usage is the iPhone.

Apple spent years incrementally improving the efficiency and performance of their chips for phones. Intel and AMD were more desktop-based, so power efficiency wasn't the goal. When Apple's chips got so good they could transition into laptops, x86 wasn't in the same ballpark.

Also the iPhone is the most lucrative product of all time (I think) and Apple poured a tonne of that money into R&D and taking the top engineers from Intel, AMD, and ARM, building one of the best silicon teams.

Apple purchased Palo Alto Semi which made the biggest difference. One of their best acquisitions ever in my opinion… not that they make all that many of those anyway.

Apple actually makes a lot more acquisitions than you think, but they are rarely very high profile/talked about: https://en.wikipedia.org/wiki/List_of_mergers_and_acquisitio...

> One of their best acquisitions ever in my opinion…

NeXT? But yes, I completely get what you’re saying, I just couldn’t resist. It was an amazingly long sighted strategic move, for sure.

> and Apple poured a tonne of that money into R&D and taking the top engineers from Intel, AMD, and ARM, building one of the best silicon teams.

How much silicon did Apple actually create? I thought they outsourced all the components?

They bought Palo Alto Semiconductor in 2008 which is where all their ARM chip designs came from.

https://en.wikipedia.org/wiki/P.A._Semi

Outsourced to who? The only companies with the engineers you’d need are the other CPU makers like Intel, AMD, Qualcomm, and Nvidia. And none of them make a CPU as efficient as Apple does.

CPU yes, but what about the rest of the iPhone?

They design much more in house than any other smartphone brand, except maybe Samsung.

CPU, GPU, neural processor, image signal processor, U1 chip for device tracking, Secure Enclave for biometrics, a 5G modem (only used in the 16e so far)…

They don’t manufacture the chips in house of course. They contract that out to TSMC and other companies.

Arm exists; it is unknown how much tech Apple gets from Arm.

Arm licenses their designs to everybody. They are okay, but you are never going to make market leading processors by using the Arm designs.

The M1 and M2 were beating the best-in-class i7 when they were released, IIRC.

Apple took the ARM base design (they licensed it), and then they modified and tweaked it.

You get the ARM ISA, and compilers that target ARM will compile to Apple Silicon. It's just that the actual hardware you get is better than the base design, and therefore beats other ARM processors in benchmarks.

> Apple took the ARM base design (they licensed it), and then they modified and tweaked it.

More likely it was derived from PWRficient, or a clean sheet design that took lessons from it.

It's more than that. They have an unlimited license to Arm designs and can change them as they see fit, since they were an early investor (or something along those lines). Other manufacturers can't get these terms, or if they can, it will be prohibitively expensive.

Apple has an architectural license that lets them build their own ARM cores:

https://www.electronicsweekly.com/news/business/finance/arm-...

It is very unlikely Apple uses anything from ARM’s core designs, since that would require paying an additional license fee and Apple was able to design superior cores using its architectural license.

Yep, Apple was a significant early investor in ARM. https://appleinsider.com/articles/23/09/05/apple-arm-have-be...

The thing about Apple having a “special license” due to being a partial founder of Arm is an urban legend. They have an architectural license, just like several other companies making custom Arm CPUs do.

Yeah, why would ARM prevent other companies from paying more for the better license?

All they care about is that companies buy an ARM license, not that they use the boilerplate ARM CPU design.

Those designs are there to make it easier for companies that would otherwise never be able to design their own chips to make ARM-based ones.

[deleted]

And TSMC (and therefore ASML, etc.); Apple usually reserves the newest upcoming node for its own production.

Besides their SoCs, Apple has also made dedicated silicon for secure enclaves, Wi-Fi, Bluetooth, ultra-wideband, and cellular radios, as well as motion coprocessors.

Apple bought PA Semi a long time ago. They have a significant silicon development group. Their architecture license (they were an early investor in ARM) for ARM means they get to basically do whatever they want using the ARM ISA. The SoCs in pretty much all their devices are designed in-house.

Were they ARM investors back when they needed a CPU for the Newton? Was that before or after, e.g., the iPaq PDAs? And later, when it looked like Apple might be in danger of going under, did they sell their ARM stake and get a cash injection that way?

I remember the iPaq PDA fondly. Wrote a demo to select a song, by voice query, from a playlist of a few thousand author-album-song entries. The WiFi add-on was a big plastic "sleeve" that the iPaq slid into, not the other way around. Could run the ASR engine for about 10 whole minutes before it drained the battery flat, haha. :-)

IIRC Apple originally invested in ARM during the development of the Newton. The original Newtons used ARM 610 CPUs. I don't know exactly when they sold their ARM stake but they kept their architecture license.

The Newton was long before the iPaq, the MessagePad was released in 1993.

What about all the components and sensors?

Apple has bought startups with various technologies like Anobit, that developed advanced flash memory controllers, and have funded development efforts by partners. For example Apple worked hand in glove with Sharp to develop the tech for their 5K display panels. They also now have their own cellular chip designs in some models, in their quest for independence from Qualcomm. That’s all from memory, I’m sure there are many more examples.

I vaguely remember Intel tried to get into the low-power / smartphone / tablet space at the time with their Atom line [0] in the late '00s, but due to core architecture issues they could never reach the efficiency of ARM-based chips.

[0] https://en.wikipedia.org/wiki/Intel_Atom

I don't think it was core architecture issues. My impression is that over the years their efforts to get into low-power devices never got the full force of their engineering prowess.

I worked for an IP vendor that was in some Atom SoCs (over a decade ago now, though). From what I remember, the perf/W was actually pretty competitive with contemporary ARM devices when we supplied the IP, but it then took so long to actually end up in products that it fell behind - other customers were already on the next generation by that point, even if the initial projects started at about the same time. And the Atoms were buggy as hell; I never had more problems with dumb cache/fabric/memory controller issues.

To me the Atom team always felt like a dead end inside Intel - everyone seemed to be trying to get into a different, higher-status team ASAP, and our engineering contacts often changed monthly, if we even knew who our "contacts" were meant to be at any time. I think any product developed like that would struggle.

I thought they just acquired P.A. Semi, job done.

When they bought PA Semi the company worked on IBM Power architecture chips. It was very much the team Apple was after, not any one particular technology.

That was a part of it, yes.

But do not forget how focused they (AMD/Intel, especially in the Opteron days -- edit) were on the server market.

I don't think it is so much the efficiency of their chips for their hardware (phones) as the efficiency of their OS for their chips and hardware design (like unified memory).

It is likely the hardware efficiency of their chips. Apple SoCs running industry-standard benchmarks still run very cool, yet still show dominant performance. The OS efficiency helps, but even under extreme stress tests like SPEC, the Apple SoCs dominate in perf & power.

See Lunar Lake on TSMC N3B, 4+4, on-package DRAM versus the M3 on TSMC N3B, 4+4, on-package DRAM: https://youtu.be/ymoiWv9BF7Q?t=531

The 258V (TSMC N3B) has a worse perf / W 1T curve than the Apple M1 (TSMC N5).

> It is likely the hardware efficiency of their chips. Apple SoCs running industry-standard benchmarks still run very cool, yet still show dominant performance

Dieselgate?

I have heard that Apple Silicon chips are designed around the retain-release cycle that goes back to NeXT and is still here today (hidden by ARC compilation), but I don't think that's the whole story. Back when the M1's came out, many benchmarks showed virtualized Windows blowing the doors off of market-equivalent x86 CPUs.

Also, there's the obvious benefits of being TSMC's best customer. And when you design a chip for low power consumption, that means you've got a higher ceiling when you introduce cooling.

The SoC benefits are being ignored by some people here. Apple doesn't control every piece of software, as some here posit; however, OS optimizations and utilization of the extra-efficiency cores (which, while they come from the SoC design, also need specific OS code support) are part of the performance.

Textbook Innovator’s Dilemma.

> A huge reason for the low power usage is the iPhone.

No, the main reason for better battery life is the RISC architecture. PCs on the ARM architecture see the same gains.

Any downvoters care to actually leave me a reply telling me why?

I'm not wrong!

Because it’s a take that sounds like someone who has been reading comp.sys.mac.advocacy from 1995, when the PPC vs x86 wars were going on (and when PPC chips were already behind in performance), up through 2005, when Apple gave up and went to Intel.

> All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

Apple is vertically integrated and can optimize at the OS level and for many of the applications they ship with the device.

Compare that to how many cooks are in the kitchen in Wintel land. Perfect example is trying to get to the bottom of why your windows laptop won't go to sleep and cooks itself in your backpack. Unless something's changed, last I checked it was a circular firing squad between laptop manufacturer, Microsoft and various hardware vendors all blaming each other.

> Apple is vertically integrated and can optimize

> Compare that to how many cooks are in the kitchen in Wintel land. Perfect example is trying to get to the bottom of why your windows laptop won't go to sleep and cooks itself in your backpack

So, I was thinking like this as well, and after I lost my Carbon X1 I felt adventurous, but not too adventurous, and wanted a laptop that "could just work". The thinking was "If Microsoft makes both the hardware and the software, it has to work perfectly fine, right?", so I bit my lip and got a Surface Pro 8.

What a horrible laptop that was, even while I was trialing just running Windows on it. It overheated almost immediately by itself, just idling, and STILL suffers from the issue where the laptop sometimes wakes itself while in my backpack, so when I actually needed it, of course it was hot and out of battery. I've owned a lot of shit laptops through the years, even some without keys in the keyboard, back when I was dirt-poor, but the Surface Pro 8 is the worst of them all; I regret buying it a lot.

I guess my point is that even though Apple seems really good at the whole "vertically integrated" concept, it isn't magic by itself; Microsoft continues to fuck up the very same thing even though they control the entire stack, so you'll still end up with backpack laptops turning themselves on / not turning off properly.

I'd wager you could let Microsoft own every piece of physical material in the world, and they'd still not be able to make a decent laptop.

Surprised to hear this. Back in the Surface Pro 4 days, the hardware was great. I made it through college doing 95% of my work on a Surface Pro 4 tablet with the magnetic keyboard and almost always made it through the entire day without having to plug it in.

My wife swears by her surface pros, and she has owned a few.

I've had a few Surface Book 2s for work, and they were fine except that they needed more RAM, and there was some issue with the connection between screen and base which made USB headsets hinky.

Apple has been vertically integrated for 50 years. Microsoft has been horizontally integrated for 50 years.

That's why Apple is good at making a whole single system that works by itself, and Microsoft is good at making a system that works with almost everything almost everyone has made almost ever.

Also on the HN front page today:

> Framework 16

> The 2nd Gen Keyboard retains the same hardware as the 1st Gen but introduces refreshed artwork and updated firmware, which includes a fix to prevent the system from waking while carried in a bag.

There are some reports of this with Macbooks as well. But my (non-scientific) impression is that a lot more people in Wintel land are seeing it. All of my work laptops, and a few of my personal laptops have done this to me since I started using Windows 10/11.

Microsoft is pushing "Modern Standby" over actual sleep, so laptops can download and install updates while closed at night.

> Microsoft is pushing "Modern Standby" over actual sleep, so laptops can download and install updates while closed at night.

Apple has this. It's called Power Nap. But for some reason, it doesn't cause the same problems reported by people here on HN.

I remember a time when this was supposed to be Wintel's advantage. It's really strange to now be in a time where Apple leads the consumer computing industry in hardware performance, yet is utterly failing at evolving the actual experience of using their computers. I'm pretty sure I'm not the only one who would gladly give up a bit of performance if it were going to result in a polished, consistent UI/UX based on the actual science of human interface design rather than this usability hellscape the Alan Dye era is sending us into.

macOS is a resource hungry pig, I wouldn't bet too much on it making a difference.

> It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I've had to enable gpu video decoding on my fw16 and haven't noticed the fans on youtube.

I've worked in video delivery for quite a while.

If I were to write the law, decision-makers wilfully forcing software video decoding where hardware is available would be made to sit on these CPUs with their bare buttocks. If that sounds inhumane, then yes, this is the harm they're bringing upon their users, and maybe it's time to stop turning the other cheek.

I run Linux Mint Mate on a 10 year old laptop. Everything works fine, but watching YouTube makes my wireless USB dongle mouse stutter a LOT. Basically if CPU usage goes up, mouse goes to hell.

Are you telling me that for some reason it's not using any hardware acceleration available while watching YouTube? How do I fix it?

It's probably the 2.4GHz WiFi transmitter interfering with the 2.4GHz mouse transmitter. You probably notice it during YouTube because it's constantly downloading. Try a wired mouse.

Interesting theory. The wired mouse is trouble-free, but I figured that was because of a better sampling rate and less overhead overall. Maybe I'll try a Bluetooth mouse or some other frequency, or the laptop on wired Ethernet, to see if the theory pans out.

Or just switch to 5GHz or 6GHz range.

The easiest way is to use Chrome or a Chrome-based browser, since they bundle codecs with the browser. If you're using Firefox, you need to make sure you have the codecs. I know nothing about Mint specifically, though, to know if they'd automatically install codecs or not.

You specifically don't want to use the bundled codecs since those would be CPU decode only.

Interesting. I'll look into that more.

I'm using Brave, and it seems the "enable hardware acceleration" box is checked.

  All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.
This isn't true. Yes, uncore power consumption is very important but so is CPU load efficiency. The faster the CPU can finish a task, the faster it can go back to sleep, aka race to sleep.

Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

Another thing that makes Apple laptops feel way more efficient is that they use a true big.LITTLE design, while AMD's and Intel's little cores are actually designed for area efficiency, not power efficiency. In the case of Intel, they stuff in as many little cores as possible to win MT benchmarks. In real-world applications, the little cores are next to useless because most applications prefer a few fast cores over many slow cores.

> Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

This is not true. For high-throughput server software x86 is significantly more efficient than Apple Silicon. Apple Silicon optimizes for idle states and x86 optimizes for throughput, which assumes very different use cases. One of the challenges for using x86 in laptops is that the microarchitectures are server-optimized at their heart.

ARM in general does not have the top-end performance of x86 if you are doing any kind of performance engineering. I don't think that is controversial. I'd still much rather have Apple Silicon in my laptop.

  For high-throughput server software x86 is significantly more efficient than Apple Silicon.
In the server space, x86 has the highest performance right now. Yes. That's true. That's also because Apple does not make server parts. Look for Qualcomm to try to win the server performance crown in the next few years with their Oryon cores.

That said, Graviton is at least 50% of all AWS deployments now. So it's winning vs x86.

  ARM in general does not have the top-end performance of x86 if you are doing any kind of performance engineering. I don't think that is controversial.
I think you'll have to define what top-end means and what performance engineering means.

I don't think the point of Amazon using ARM was about performance; it was purely cost optimisation. At one point, nearly 40% of Intel's server revenue was coming from Amazon. They just figured out that at their scale it would be cheaper to do it themselves.

But I am purely guessing that ARM has raised their price per core, so it makes less financial sense to do a yearly CPU update. They are also going into the server CPU business, meaning they now have some incentive to keep it all to themselves. Which makes the Nvidia moves really smart, as they decided to go for the ISA licences and do it themselves.

> Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

This is false; in cross-platform tasks it's on par with, if not worse than, the latest x86 arches. As others pointed out: 2.5h in gaming is about what you'd expect from a similarly built x86 machine.

They are winning due to lower idle and low-load consumption, which they achieve by integrating everything as much as possible - something that's basically impossible for AMD and Intel.

> The faster the CPU can finish a task, the faster it can go back to sleep, aka race to sleep.

May have been true when CPU manufacturers left a ton of headroom on the V/F curve, but not really true anymore. A Zen 4 core's power draw shoots up sharply past 4.6 GHz and nearly triples when you approach 5.5 GHz (compared to 4.6); are you gonna complete the task 3 times faster at 5.5 GHz?
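
Rough numbers to make that concrete, taking the ~3x power figure above at face value and assuming (optimistically) that runtime scales inversely with clock:

    \[
      E = P \cdot t, \qquad
      \frac{E_{5.5\,\mathrm{GHz}}}{E_{4.6\,\mathrm{GHz}}}
        \approx \frac{3P \cdot (4.6/5.5)\,t}{P \cdot t} \approx 2.5
    \]

So you finish roughly 16% sooner but spend about 2.5x the energy doing it. Race to sleep only pays while you're still on the flat part of the V/F curve; past the knee you're burning extra energy to save a sliver of time.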

  This is false; in cross-platform tasks it's on par with, if not worse than, the latest x86 arches.
This is Cinebench 2024, a cross platform application: https://imgur.com/a/yvpEpKF

  They are winning due to lower idle and low-load consumption, which they achieve by integrating everything as much as possible - something that's basically impossible for AMD and Intel.
Weird because LNL achieved similar idle wattage as Apple Silicon.[0] Why do you say it's impossible?

  May have been true when CPU manufacturers left a ton of headroom on the V/F curve, but not really true anymore. A Zen 4 core's power draw shoots up sharply past 4.6 GHz and nearly triples when you approach 5.5 GHz (compared to 4.6); are you gonna complete the task 3 times faster at 5.5 GHz?
Honestly not sure how your statement is relevant.

[0]https://www.notebookcheck.net/Dell-XPS-13-9350-laptop-review...

> This is Cinebench 2024, a cross platform application: https://imgur.com/a/yvpEpKF

You sure like that table, don't you? Trying to find the source of those Blender numbers, I came across many Reddit posts of yours with that exact same table. Sadly, those also don't have a source - they are not from the notebookcheck link.

The reason I keep reposting this table is that people post incorrect statements about AMD/Apple so often, often with zero data backing them.

For Blender numbers, M4 Pro numbers came from Max Tech's review.[0] I don't remember where I got the Strix Halo numbers from. Could have been from another Youtube video or some old Notebookcheck article.

Anyway, Blender has official GPU benchmark numbers now:

M4 Pro: 2497 [1]

Strix Halo: 1304 [2]

So the M4 Pro is roughly 90% faster in the latest Blender. The most likely reason Blender's official numbers favor the M4 Pro even more is more recent optimizations.

Sources:

[0]https://youtu.be/0aLg_a9yrZk?si=NKcx3cl0NVdn4bwk&t=325

[1] https://opendata.blender.org/devices/Apple%20M4%20Pro%20(GPU...

[2] https://opendata.blender.org/devices/AMD%20Radeon%208060S%20...

Weren't we comparing CPUs though? Those Blender benchmarks are for GPUs.

Here is M4 Max CPU https://opendata.blender.org/devices/Apple%20M4%20Max/ - median score 475

Ryzen MAX+ PRO 395 shows median score 448 (can't link because the site does not seem to cope well with + or / in product names)

Resulting in M4 winning by 6%

  Weren't we comparing CPUs though? Those Blender benchmarks are for GPUs.
Yes, but I was asked about Blender GPU.

Blender CPU tasks are highly parallel. AMD's Ryzen Max 395 has great MT performance. It's generally 5-20% slower in CPU MT than the M4 Max depending on the application.

> Weird because LNL achieved similar idle wattage as Apple Silicon.[0] Why do you say it's impossible?

And where is LNL now? How's the company that produced it? Even under Pat Gelsinger they said that LNL is a one off and they're not gonna make any more of them. It's commercially infeasible.

> Honestly not sure how your statement is relevant.

How is you bringing up synthetics relevant to race to idle?

Regardless, a number of things can be done on Strix Halo to improve the performance; the first would be switching to an optimized Linux distro, or at least an optimized kernel. That would claw back 5-20% depending on the task. It would also improve single-core efficiency: I've seen my 7945HX drop from 14-15 W idle on Windows to about 7-8 W on Linux, because Windows likes to jerk off the CCDs non-stop and throw tasks around willy-nilly, which causes the second CCD and the I/O die to never properly idle.

  And where is LNL now? How's the company that produced it? Even under Pat Gelsinger they said that LNL is a one off and they're not gonna make any more of them. It's commercially infeasible.
Why does it matter that LNL is bad economically? LNL shows that it's definitely possible to achieve the same or even better idle wattage than Apple Silicon.

  How is you bringing up synthetics relevant to race to idle?
I truly don't understand what you mean.

> All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

A good demonstration is the Android kernel. By far the biggest difference between it and the stock Linux kernel is power management. Many subsystems down to the process scheduler are modified and tuned to improve battery life.

And the more relevant case for laptops is macOS, which is heavily optimized for battery life and power draw in ways that Linux just isn't, neither is Windows. A lot of the problems here can't actually be fixed by Intel, AMD, or anyone designing x86 laptops, because getting that level of efficiency requires the ability to strongly lead the app developer community. It also requires highly competent operating system developers focusing on the issue for a very long time, and being able to co-design the operating system, firmware, and hardware together. Microsoft barely cares about Windows anymore, the Linux guys have only cared about servers since forever, and that leaves Apple alone in the market. I doubt anything will change anytime soon.

>And the more relevant case for laptops is macOS, which is heavily optimized for battery life and power draw in ways that Linux just isn't, neither is Windows.

What are some examples of power draw savings that Linux is leaving on the table?

Power efficiency is very important to servers too, for cost rather than battery life. But energy is energy. Thus, I suspect that the power draw is in userland systems that are specific to the desktop, like desktop environments, and that using a simpler desktop environment may be worthwhile.

It's important but not relative to performance. Perf/watt thinking has a much longer history in mobile and laptop spaces. Even in servers most workloads haven't migrated to ARM.

I used Ubuntu around 2015-2018 and got hit with a nasty defect around the GNOME Online Accounts integrations (please correct me if the words are wrong here). For some reason, it got stuck in a loop or a bad state on my machine. I have since decided that I will never add any of my online accounts - Facebook, Google, or anything - to GNOME.

If x86 just officially said “we’re cutting off 32-bit legacy” one day (similar to how Apple did), they could toss out 95% of the crap that makes them power inefficient. Just think of the difference dropping A10 offered for memory efficiency.

“Modern Standby” could be made to actually work, ACPI states could be fixed, a functional wake-up state built anew, etc. Hell, while it would allow pared down CPUs, you could have a stop-gap where run mode was customized in firmware.

Too much credit is given to Apple for “owning the stack” and too little attention to legacy x86 cruft that allows you to run classic Doom and Commander Keen on modern machines.

>If x86 just officially said “we’re cutting off 32-bit legacy” one day (similar to how Apple did), they could toss out 95% of the crap that makes them power inefficient.

Where do you get this from? I could understand that they could get rid of the die area devoted to x86 decoding, but as I understand it x86 and x86-64 instructions get interpreted by the same execution units, which are bitness blind. What makes you think it's x86 support that's responsible for the vast majority of power inefficiency in x86-64 processors?

Intel has proposed APX to address this. It does away with some of the 32-bit garbage that complicates design for no good payoff. Most importantly, it increases from 16 to 32 registers and allows 3-register instructions (almost all x86 instructions are 1-register or 2-register instructions). This would strip out tons of MOV instructions which was proven with AMD64 to have a decent impact on performance.

Reduced I-cache, uop cache, and decoder pressure would also have a beneficial impact. On the flip side, APX instructions would all be an entire byte longer than their AMD64 counterparts, so some of the benefits would be more muted than they might first appear, and choosing between 16 registers with shorter instructions vs 32 registers with longer instructions is yet another tradeoff for compilers to make (and takes another step down the path of being completely unoptimizable by humans).

>This would strip out tons of MOV instructions which was proven with AMD64 to have a decent impact on performance.

Sure, but the topic is optimizing power efficiency by removing support for an instruction set. That aside, if an instruction isn't very performant, it isn't much of an issue per se. It just means it won't get used much and so chip design resources will be suboptimally allocated. That's a problem for Intel and AMD, and for nobody else.

From what I understood, it's not "32-bit instructions" that are the problem; it's a load of crap associated with those 32-bit processors. There's more to x86 than just the instruction set. Operating systems need to carry the baggage in x86 if they want to allow users to run on old and new processors.

Before addressing anything else, "software is complicated by having to support legacy stuff" is not a valid argument for removing that support at the hardware level. If a software developer wishes to design their software without that legacy support, that's their prerogative.

>Operating systems need to carry the baggage in x86 if they want to allow users to run on old and new processors.

What do you mean by this exactly? Are you talking about hybrid execution like WOW64, or simple multi-platform support like the Linux kernel?

WOW64 is irrelevant as far as power efficiency is concerned if the user doesn't run any x86 software. If the user is running x86 software, that's a reason not to remove that support.

Multi-platform support shouldn't have an effect on power efficiency, beyond complicating the design of the system. Saying that the Linux kernel should stop supporting x86 so x86-64 can be more power-efficient is like saying that it should stop supporting... whatever, PowerPC, for that same reason. It's a non sequitur.

Removing 32-bit hardware support frees up die space, and it frees up storage space and RAM, since 32-bit and 64-bit libraries both had to be on disk and in memory.

They don't use memory if they're not used, but you do save storage. Neither one has any effect on power efficiency, though. None of these savings require the hardware to lose useful features. Microsoft could at any time decide to drop WOW64.

Saving die space also has no effect on power efficiency, beyond reducing the total transistor count. I'd be very surprised if the x86-specific decoding logic made up a significant area of your typical die. Maybe you'd make the processor 3% more efficient? Something like that?

> “Modern Standby” could be made to actually work, ACPI states could be fixed, a functional wake-up state built anew, etc. Hell, while it would allow pared down CPUs, you could have a stop-gap where run mode was customized in firmware.

I'm confused, how is any of this related to "x86" and not the diverse array of third party hardware and software built with varying degrees of competence?

It's a shame they are so bad at upstreaming stuff, and run on older kernels (which in turn makes upstreaming harder).

[deleted]

> It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding.

To be fair, usually Linux itself has hardware acceleration available, but the browser vendors tend to disable GPU rendering except on controlled/known perfectly working combinations of OS/hardware/drivers, and they do much less testing on Linux. In most cases you can force-enable GPU rendering in about:config, try it out yourself, and leave it on unless you get recurring crashes.

The only browser I’ve ever had issues with enabling video acceleration on Linux is Firefox.

All the Blink-based ones just work as long as the proper libraries are installed and said libraries properly detect hardware support.

I run Fedora, and for legal reasons they ship a version that has this problem. Have you tried Mozilla's Flatpak build? I use it instead and it resolves all my problems.

When I enabled HW acceleration on my Linux laptop to see how much it would improve battery life in Linux, my automated test (which is basically just browsing Reddit) would start crashing every 20 minutes or so.

I once saw a high resolution CPU graph of a video playing in Safari. It was completely dead except for a blip every 1/30th of a second.

Incredible discipline. The Chrome graph in comparison was a mess.

The Safari team explicitly treats perf as a target. I just wish they weren't so bad about extensions and adblock; then I'd use it as my daily driver. But those paper cuts make me go back to Chromium browsers all the time.

I find Orion has similar power efficiency but avoids those papercuts: https://kagi.com/orion/

I disable turbo boost on the CPU on Linux. The fans rarely start on the laptop and the system is generally cool. Even working on development and compilation, I rarely need the extra perf. For my 10-year-old laptop I also cap the max clock at 95% to stop the fans from always starting. YMMV.
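
For reference, on Linux the turbo knob is just a sysfs file. A minimal sketch, assuming the intel_pstate driver is active (on acpi-cpufreq/amd-pstate systems the equivalent knob is /sys/devices/system/cpu/cpufreq/boost, and per-core clock caps live under each cpu*/cpufreq/scaling_max_freq); needs root:

    /* Minimal sketch: disable turbo via the intel_pstate sysfs interface.
       Writing "0" instead re-enables it. Assumes intel_pstate; run as root. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/sys/devices/system/cpu/intel_pstate/no_turbo", "w");
        if (!f) { perror("open no_turbo (intel_pstate active? running as root?)"); return 1; }
        fputs("1\n", f);   /* 1 = turbo disabled */
        return fclose(f) == 0 ? 0 : 1;
    }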

Hell, Apple CPU's are even optimized for Apple software GC calls like Retain/Release objects. It seems if you want optimal performance and power efficiency, you need to own both hardware and software.

Looks like general purpose CPUs are on the losing train.

Maybe Intel should invent desktop+mobile OS and design bespoke chips for those.

> Apple CPU's are even optimized for Apple software GC calls like Retain/Release objects.

I assume this is referring to the tweet from the launch of the M1 showing off that retaining and releasing an NSObject is like 3x faster. That's more of a general case of the ARM ISA being a better fit for modern software than x86, not some specific optimization for Apple's software.

x86 was designed long before desktops had multi-core processors and out-of-order execution, so for backwards compatibility reasons the architecture severely restricts how the processor is allowed to reorder memory operations. ARM was designed later, and requires software to explicitly request synchronization of memory operations where it's needed, which is much more performant and a closer match for the expectations of modern software, particularly post-C/C++11 (which have a weak memory model at the language level).

Reference counting operations are simple atomic increments and decrements, and when your software uses these operations heavily (like Apple's does), it can benefit significantly from running on hardware with a weak memory model.
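
To make that concrete, here's a minimal C11 sketch of the retain/release idiom being described (not Apple's actual implementation, just the standard atomic refcounting pattern): the retain only needs atomicity, not ordering, which is exactly the kind of operation a weakly ordered ISA can make cheap, whereas on x86 every locked RMW behaves like a full barrier regardless of the ordering you request.

    /* Minimal C11 sketch of atomic refcounting; object_t and the helpers
       are hypothetical names used only for illustration. */
    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct {
        atomic_uint refcount;
        /* object payload would live here */
    } object_t;

    static object_t *object_create(void) {
        object_t *obj = calloc(1, sizeof(*obj));
        if (obj) atomic_init(&obj->refcount, 1);
        return obj;
    }

    /* Retain: the increment only needs to be atomic, not ordered, so a
       relaxed RMW is enough - cheap on a weakly ordered core. */
    static void retain(object_t *obj) {
        atomic_fetch_add_explicit(&obj->refcount, 1, memory_order_relaxed);
    }

    /* Release: the decrement must publish prior writes to the object
       (release), and the thread that frees it must observe them
       (acquire fence before free). */
    static void release(object_t *obj) {
        if (atomic_fetch_sub_explicit(&obj->refcount, 1, memory_order_release) == 1) {
            atomic_thread_fence(memory_order_acquire);
            free(obj);
        }
    }

    int main(void) {
        object_t *obj = object_create();
        retain(obj);   /* second reference */
        release(obj);
        release(obj);  /* drops the last reference and frees the object */
        return 0;
    }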

> I assume this is referring to the tweet from the launch of the M1 showing off that retaining and releasing an NSObject is like 3x faster. That's more of a general case of the ARM ISA being a better fit for modern software than x86, not some specific optimization for Apple's software.

It's not really even the ISA, mainly the implementation. Atomics on Apple cores are 3x faster than on Intel (6 cycles of back-to-back latency vs 18). AMD's atomics have 6-cycle latency.

  It seems if you want optimal performance and power efficiency, you need to own both hardware and software.
Does Apple optimize the OS for its chips and vice versa? Yes. However, Apple Silicon hardware is just that good and that far ahead of x86.

Here's an M4 Max running macOS running Parallels running Windows when compared to the fastest AMD laptop chip: https://browser.geekbench.com/v6/cpu/compare/13494385?baseli...

M4 Max is still faster even with 14 out of 16 possible cores being used. You can't chalk that up to optimizations anymore because Windows has no Apple Silicon optimizations.

Not really sure whether it makes a difference, but the Parallels VM is running Windows Pro, while the Windows OS on the ASUS gaming laptop is running Windows Home.

> Maybe Intel should invent desktop+mobile OS and design bespoke chips for those.

Wouldn't it be easier for Intel to heavily modify the Linux kernel instead of writing their own stack?

They could even go as far as writing the sleep utilities for laptops, or even their own window manager to take advantage of the specific mods in the ISA?

Intel was working with Nokia and investing heavily in MeeGo until it was killed by Elop+Microsoft.

If it hadn't been killed, it may have become something interesting today.

they /did/ this but notice the "was" at the top of the page: https://www.clearlinux.org/

> Maybe Intel should invent desktop+mobile OS and design bespoke chips for those.

Or, contribute efficiency updates to popular open projects like firefox, chromium, etc...

[flagged]

> most of which come down to using the CPU as little as possible.

At least on mobile platforms, Apple advocates the other way with race to sleep - do the calculation as fast as you can with the powerful cores so that the whole chip can go back to sleep earlier and take naps more often.

Intel pushed the same idea under the name HUGI (Hurry Up and Go Idle) about 15 years ago, when ultrabooks were the new hot thing.

But when Apple says it, software devs actually listen.

Peer pressure. When everybody else does it and you don't, your app sticks out like a sore thumb and makes users unhappy.

The other aspect of it is that paid software is more prevalent in macOS land, and the prices are generally higher than on Windows. But the flip side of that is that user feedback is taken more seriously.

And then Microsoft adds an animated news tracker to the left corner of the start bar, making sure the cpu never gets to idle.

Which should also mean that using that M1 machine with Linux will give you an Intel/AMD-like experience, not the M1-with-macOS experience.

Turning down the settings will get you a worse experience, especially if you turn them down so far that the CPU and GPU are "mostly idle". Not comparable.

Sounds like death by (2^10 - 24) cuts for the x86 architecture.

I honestly don't see myself ever leaving MacBooks at this point. It's the whole package: the battery life is insane - I've literally never had a dead laptop when I needed it, no matter what I'm doing or where I'm at; it runs circles around every other computer I own, save for my beastly gaming PC; the stability and consistency of macOS, and the underlying Unix architecture for a lot of tooling; all the way down to the build quality being damn near flawless, save for the annoying lack of ports (though increasingly, I find myself needing ports less and less).

Like, would I prefer an older-style Macbook overall, with an integrated card reader, HDMI port, ethernet jack, all that? Yeah, sure. But to get that now I have to go to a PC laptop and there's so many compromises there. The battery life isn't even in the same zip code as a Mac, they're much heavier, the chips run hot even just doing web browsing let alone any actual work, and they CREAK. Like my god I don't remember the last time I had a Windows laptop open and it wasn't making all manner of creaks and groans and squeaks.

The last one would be solved, I guess, if you went for something super high end, or at least I hope it would be. But I dunno - if I'm dropping $3k+ either way, I'd just as soon stay with the MacBook.

> Like, would I prefer an older-style Macbook overall, with an integrated card reader, HDMI port, ethernet jack, all that? Yeah, sure.

Modern MacBook pros have 2/3 (card reader and HDMI port), and they brought back my beloved MagSafe charging.

I was all for MagSafe, but after buying an M2, I realized that the USB-C charging was better. I found the cables came out almost as well as the MagSafe if I stepped on them, but you can plug them in to either side. I seem to always be on the wrong side, so the MagSafe cable has to snake around to the other side.

No shit! I'm still rocking the M1 Pro for personal and the M2 Air for work so I do have magsafe back for one of them at least, but just USB-C besides that.

But yeah IMHO there's just no comparison. Unless you're one of those folks who simply cannot fucking stand Mac, it's just no contest.

Even the high-end ones (Razers, Asus, Surface Books, Lenovos) are mere lookalikes and don't run anywhere near as well as the MacBooks. They're hot and heavy and loud, full of driver issues and discrete-graphics switching headaches, and of course the endless ads and AI spam of modern Windows. No comparison at all...