> That will help save enormous amounts of power: up to 48 percent on a single charge,

Why does refresh rate have such a large impact on power consumption? I understand that the control electronics are 60x more active at 60 Hz than 1 Hz, but shouldn't the light emission itself be the dominant source of power consumption by far?

I used to be a display architect about 15 years back (for Qualcomm mirasol, et al), so my knowledge of the specifics / numbers is outdated. Sharing what I know.

High pixel density displays have disproportionately higher display refresh power (not just proportional to the total number of pixels, as the column line capacitances need to be driven again for writing each row of pixels). This was an important concern as high pixel densities were coming along.

Displays need fast refreshing not just because pixels would lose charge, but because the refresh itself can be visible or result in flicker. Some pixel technologies require flipping polarity on each refresh, but the curves are not exactly symmetric between polarities, and further, this can vary across the panel. A fast enough refresh hides the mismatch.

Since you are knowledgeable about this, do you have any idea what happened to Mirasol technology? I was fascinated by those colour e-paper-like displays, and disappointed when plans to manufacture it were shelved. Then I learnt Apple purchased it, but it looks more like a patent-padding purchase than one for tech development, as nothing has come out of it from Apple either. Is it in some way still being developed, or are parts of its research being used in display development?

Being a key technology architect for it (not the core inventor), I know all about it, and then some more!

I cannot however talk publicly about it. :-(

It has been a disappointment for me as well. I had worked on it for nearly eight years. The idea was so interesting -- using thin-film interference for creating images is akin to shaping Newton's rings into arbitrary images, something which even Newton would not have imagined! The demos and comparisons we had shown to various industry leaders, and sometimes publicly, were often instantly compelling. The people/engineers in the team were mostly the best I have ever worked with, and I still maintain a great connection with them. But unfortunately, there were problems (not saying how much was tech and how much was people) that were recognized by some but never got addressed in time. And a tech like it does not exist to date.

I do not think anything on it is being developed further.

The earliest of the patents would have expired by now.

Liquavista, Pixtronics, etc., have been alternative display technologies that also ultimately didn't make the impact desired, AFAIK.

Meanwhile, LCDs developed high pixel densities (which led to pressures on mirasol tech too), and Plasma got sidelined. EInk displays have since made good progress, though, in my opinion, they are still far from the colors and speeds that mirasol had. And of course, OLED, quantum dots, ...

My fantasy display would be some kind of reflective-mode display that can passively show static images like e-ink, have partial updates like MIP LCD in wearables, response times like modern LCD and AMOLED, and "super-real" contrast/gain.

I.e. actually do wavelength conversion to not just reflect a narrow-pass filtered version of the ambient light, but convert that broad spectrum energy into the desired visuals, so it isn't always inherently dimmer than the environment. I can only imagine this being either:

1. some wild materials science stuff that manages interference

2. some wild materials science stuff that controls multi-photon fluorescence

3. some wild materials science stuff to fuse photoelectric and electroemissive functions in the same panel. i.e. not really passive but extremely low loss active system to double-convert the ambient light that can follow the power curve of available light

>> My fantasy display would be some kind of reflective-mode display that can passively show static images like e-ink, have partial updates like MIP LCD in wearables, response times like modern LCD and AMOLED, and "super-real" contrast/gain.

What about cost? :-) It is an important factor too outside of the fantasy world, and it can kill new display technologies. The latter often suffer from yield issues (dead pixels, etc.) during early phases of R&D, which can make initial costs even higher as compared to already-matured technologies.

>> I.e. actually do wavelength conversion to not just reflect a narrow-pass filtered version of the ambient light, but convert that broad spectrum energy into the desired visuals

Reflecting a filtered version of the ambient light, if done efficiently, makes the display about as bright as other natural/common objects around it. So it should be good enough for most purposes, even in a somewhat darker ambient with eyes adjusted.

It would not, however, be attention-grabbing by being brighter than those surrounding objects. So many users, often used to seeing brighter emissive displays, still do not pick such displays as their preference.

>> I can only imagine this being either:

>> ...

Another way to make it look brighter is to reflect more light towards the users/eyes while capturing it from broader directions. This would compromise on viewing angle (unless more fantasy tech is brought in), but I think this in itself takes the display to wow levels.

Well, the reflectivity of color MIP LCD is not very satisfactory. It is barely adequate, even for people like me who are fans. This is both because of the narrow-band RGB filtering and the inherent losses of the polarization-based switching method. Even the "white" state is discarding most polarizations of the ambient light, and then the darker colors are even blocking that.

My fantasy is having the reflectivity be at least as good as good white paper, and with deep contrast too.

It also needs to be brighter in practice than normal objects because, no matter what, it will have to overcome some glare from whatever protective glass and touch sensing layers there are over the actual display.

>> Well, the reflectivity of color MIP LCD is not very satisfactory. It is barely adequate, even for people like me who are fans. This is both because of the narrow-band RGB filtering and the inherent losses of the polarization-based switching method. Even the "white" state is discarding most polarizations of the ambient light, and then the darker colors are even blocking that.

Yes, that's right. A typical color LCD transmits only about 5-10% of the light for white because of all those factors.
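For a rough sense of where that 5-10% figure comes from, here is a toy multiplication of per-layer transmission losses; the individual factors are illustrative assumptions, not measurements of any particular panel:

    # Illustrative per-layer transmission factors for a color LCD stack.
    losses = {
        "entry polarizer (unpolarized ambient -> one polarization)": 0.45,
        "RGB color filter (each subpixel passes roughly 1/3 of the spectrum)": 0.30,
        "aperture ratio (area lost to TFTs, storage caps, wiring)": 0.70,
        "other layers (exit polarizer, ITO, surface reflections)": 0.85,
    }

    white_transmission = 1.0
    for factor in losses.values():
        white_transmission *= factor

    print(f"white-state transmission ~ {white_transmission:.1%}")  # about 8%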

>> My fantasy is having the reflectivity be at least as good as good white paper, and with deep contrast too.

That exactly was our benchmark for mirasol development. We used to measure best-in-class color prints for color gamut, brightness, contrast, etc.

mirasol did not use polarizers or RGB filters. An advanced architecture (that I was leading) also avoided RGB subpixels, something which very few alternative technologies can do [1].

>> It also needs to be brighter in practice than normal objects because, no matter what, it will have to overcome some glare from whatever protective glass and touch sensing layers there are over the actual display.

Yes.

Integrated touch-sensing helps significantly though.

There are also optical means that can nearly get rid of glare, if cost were not an issue. I have seen demo coatings that make the glass practically disappear -- we would repeatedly walk into it if it were used on a glass door.

-------

[1] Liquavista had Cyan-Magenta-Yellow subpixels vertically stacked. A new Eink architecture uses multiple colored pigments within the same cell but now needs sophisticated mechanisms to control them independently.


What's interesting about these newer 1Hz claims is that they're basically trying to sidestep the exact problems you mention.

Correct.

I myself have been privy to similar R&D going on for more than a decade.

> the column lines capacitances need to be driven again for writing each row of pixels

Not my field so please forgive a possibly obvious question: That seems true regardless of the pixel count (?), so for that process why wouldn't power also be proportional to the pixel count?

I notice I'm saying 'pixel count' and you are saying 'pixel density'; does it have something to do with their proximity to each other?

Total column line capacitance is impacted by the number of pixels hanging onto it as each transistor (going to the pixel capacitance) adds some parasitic capacitance of its own. Hope that answers your question. You are right in the sense that a part of the total column capacitance would depend on just the length and width of it, irrespective of the number of pixels hanging onto it.

I had back then developed what was perhaps the most sophisticated system-level model for display power, including refresh, illumination, etc., and it included all those terms for capacitance, a simplified transistor model, pixel model, etc.

I did not carefully distinguish pixel density vs. pixel count while writing my previous comments here, just to keep it simple. You can perhaps imagine that increasing display size without changing pixel count can lead to higher active pixel area percentage, which in turn would lead to better light generation/transmission/reflection efficiency. There are multiple initially counter-intuitive couplings like that. So it ultimately comes down to mathematical modeling, and the scaling laws / derivatives depend on the actual numbers chosen.

Addition:

Another important point -- Column line capacitances do not necessarily need full refresh going from one row of the pixels to the next, as the image would typically have vertical correlations. Not mentioning this is another simplification I made in my previous comments. My detailed power model included this as well -- so it could calculate energy spent for writing a specific image, a random image, a statistically typical image, etc.
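As a rough illustration of the kind of model being described (not the actual model, and with made-up capacitance and voltage numbers), the column-drive energy term looks something like this:

    # Toy column-drive energy estimate: fixed trace capacitance plus one parasitic
    # contribution per pixel transistor on the line; energy ~ C * V^2 per transition.
    # All numbers are illustrative assumptions.

    def column_capacitance(trace_cap_pF, parasitic_per_pixel_fF, rows):
        return trace_cap_pF * 1e-12 + rows * parasitic_per_pixel_fF * 1e-15

    def column_drive_power(cols, rows, refresh_hz, swing_V,
                           trace_cap_pF=20.0, parasitic_per_pixel_fF=10.0,
                           transition_prob=1.0):
        # transition_prob < 1 models vertical correlation in the image:
        # a column only needs re-driving when the next row's value differs.
        c_col = column_capacitance(trace_cap_pF, parasitic_per_pixel_fF, rows)
        energy_per_frame = cols * rows * transition_prob * c_col * swing_V ** 2
        return energy_per_frame * refresh_hz  # watts

    print(column_drive_power(3000, 2000, refresh_hz=60, swing_V=5.0))  # ~0.36 W
    print(column_drive_power(3000, 2000, refresh_hz=1, swing_V=5.0))   # ~0.006 W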

Hmm, are you saying that LCD (without memory-in-pixel) could have enough persistence to hold a (well disciplined) image without a constant, high-frequency driver? I was under the impression that the partial crystal alignments needed for modern color gamuts require constant, dynamic control.

Am I mistaken? Is it feasible that there could be (analog?) charge memory to hold each sub-pixel at a stable partial alignment, without the high-frequency driver signals being reasserted?

I have understood the reason MIP LCD works is that there is a RAM bit embedded in each sub-pixel, so it can locally maintain a static, binary charge state without dynamic refresh. There is no high-frequency oscillating circuit to provide this persistence. The only way I could see this work for increased color depths would be if there were a recursive hierarchy of sub-pixels, each with that 1-bit state. E.g. a series of 1/2, 1/4, 1/8, ... area sub-pixels could encode a linear color space, with all the emission areas adding together to physically embody the DAC.

>> Hmm, are you saying that LCD (without memory-in-pixel) could have enough persistence to hold a (well disciplined) image without a constant, high-frequency driver? I was under the impression that the partial crystal alignments needed for modern color gamuts require constant, dynamic control. Am I mistaken? Is it feasible that there could be (analog?) charge memory to hold each sub-pixel at a stable partial alignment, without the high-frequency driver signals being reasserted?

Short answer: Yes.

Active matrix panels use transistors as switches, typically one transistor per (sub-)pixel. Only a single row of (sub-)pixels is addressed at a time, i.e., the switches are 'on' (conducting) only for one row during the refresh cycle. The pixels on the rest of the panel maintain a floating charge as the switches are in off state. The charge is held, except for leakage currents (more on that later). All this is just like DRAM.

You may then think that the voltage on these disconnected pixels would also be near-constant during this disconnected phase. However, the LC (Liquid Crystal) is usually mechanically slower to react: it keeps adjusting to the charge placed during the ON phase, and its capacitance changes as the LC adjusts. So the voltage changes somewhat.

For OLEDs, a constant current is needed. So AFAIK, the charge is held on the gate (input) of an additional transistor, which turns it into a current through the diode.

Often a storage capacitor is explicitly added to each pixel to (a) counter the leakage current, and (b) hold the voltage better for LC, where the capacitance changes.

The time available to write a single row is frame time (or field time) divided by the number of rows. That is often very small (e.g., 16 ms / 1000 rows = 16 us) as compared to the LC response times (say > 1 ms). Since the LC pixel cap is not constant, the value written within the short ON time changes, and gets corrected only when a new field/frame is written. This implies motion artifacts even with 1 ms LC response time, since the next field/frame may come only after say 16 ms (1/60 Hz). A smarter drive scheme could anticipate the capacitance change and supply a pre-adjusted voltage to compensate.
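To put numbers on that timing mismatch (using the same illustrative figures as above):

    # Row write window vs. LC response time (illustrative numbers).
    frame_time_s = 1 / 60                    # 60 Hz refresh
    rows = 1000
    row_write_time_s = frame_time_s / rows   # time each row's switches are ON
    lc_response_s = 1e-3                     # assumed LC mechanical response time

    print(f"row write window: {row_write_time_s * 1e6:.0f} us")  # ~17 us
    print(f"LC response:      {lc_response_s * 1e6:.0f} us")     # 1000 us
    # The LC (and hence the pixel capacitance) keeps moving long after the switch
    # opens, so the held voltage drifts until the next frame corrects it.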

Now for the charge leakage: the leakage current pathways are usually not through the LC, but through the transistor itself! Leakage currents in the (sub-)pA range are normal. And this is where oxide transistors come in, e.g., IGZO. The leakage current is next to zero.

So the device will hold the charge for much longer. It may even be more than a second; however, the polarity-reversal requirement may be faster (I am not sure).
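A quick droop estimate shows why the transistor leakage, rather than the LC, sets the usable hold time. The capacitance and leakage values below are assumptions, not specs for any particular panel:

    # Voltage droop on a floating pixel: droop = I_leak * t_hold / C_storage.
    c_storage_F = 0.5e-12            # assumed pixel + storage capacitance (0.5 pF)

    def droop_mV(i_leak_A, t_hold_s):
        return i_leak_A * t_hold_s / c_storage_F * 1e3

    print(droop_mV(1e-12, 1 / 60))   # ~33 mV per 60 Hz frame with ~1 pA leakage
    print(droop_mV(1e-12, 1.0))      # ~2000 mV over 1 s: the image visibly decays
    print(droop_mV(1e-14, 1.0))      # ~20 mV over 1 s with an oxide (IGZO-like) TFT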

In one experiment a colleague performed, a mirasol passive matrix display was disconnected altogether from the side electronics, and it held the image intact for days. No transistor in a passive matrix display and practically no leakage!

>> I have understood the reason MIP LCD works is that there is a RAM bit embedded in each sub-pixel, so it can locally maintain a static, binary charge state without dynamic refresh.

Yes. The memory in pixel is like going from DRAM to SRAM. No (external) refresh needed anymore as the RAM cell stays connected to the power supply and easily counters leakage currents (including its own transistors).

The cost: some of the pixel area may be lost to the circuitry. Maybe some loss of yield because of the more complex circuitry.

Another cost, as you wrote: it's binary now. (Assuming you can't afford to include more bits and a DAC in every pixel.)

>> There is no high-frequency oscillating circuit to provide this persistence.

There's no 'oscillatory' stuff needed. The persistence issue is just because of charge loss from leakage. So you need to bring the same voltage again (unless the image changes).

>> The only way I could see this work for increased color depths would be if there were a recursive hierarchy of sub-pixels, each with that 1-bit state. E.g. a series of 1/2, 1/4, 1/8, ... area sub-pixels could encode a linear color space, with all the emission areas adding together to physically embody the DAC.

Yes. And this isn't just science fiction, as in, this has been done.

It need not be just space-wise, though. It could also be time-division with such ratioed intervals, or a combination of space and time. E.g., two subpixels and then two temporal fields (I call them bit-planes), yielding four bits.

Again, these things are actually done. DLP projectors use temporal fields.
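A toy sketch of that space/time combination, with binary-weighted areas and field durations chosen purely for illustration (not any product's actual weighting):

    # Two area-weighted subpixels x two duration-weighted temporal fields = 4 bits.
    AREA_WEIGHTS = (1, 2)   # relative subpixel areas (assumed)
    TIME_WEIGHTS = (1, 4)   # relative durations of the temporal fields (assumed)

    def perceived_level(bits):
        """bits[i][j]: on/off state of area-subpixel i during temporal field j.
        The eye integrates over area and time, summing the weighted contributions."""
        full = sum(a * t for a in AREA_WEIGHTS for t in TIME_WEIGHTS)  # = 15
        lit = sum(AREA_WEIGHTS[i] * TIME_WEIGHTS[j]
                  for i in range(2) for j in range(2) if bits[i][j])
        return lit / full  # 16 distinct levels: a 4-bit linear "DAC" in the eye

    print(perceived_level([[1, 0], [0, 1]]))  # (1*1 + 2*4) / 15 = 0.6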

Hope this helps.

Thanks, it was very instructive.

I know of DLP and I know of temporal dithering, which I lump into the "oscillatory stuff" which I assume has significant power consumption compared to the static scenarios like MIP LCD. I think I also conflate any dynamic refresh process into this same category, though I guess that may be too broad a brush...

When I was thinking about the sub-areas to implement an optical DAC, I was thinking about this in the low power realm of a self-sustaining MIP LCD without display refresh, but with more bit depth.

What I didn't fully appreciate is the nice analogy of regular active matrix LCD to DRAM. I did understand that MIP LCD sounds like embedded SRAM.

The difference with active-matrix is that it is analog, right? I.e. the DAC is in the part of the display driver that is generating a pixel serialized signal that is distributed out to the panel lines and columns? So the different sub-pixel levels are analog voltages applied during this refresh, and the dynamic "memory" is some combination of the floating transistor input and the intrinsic physical hysteresis of the liquid crystal cell. (By contrast, MIP is actually holding a digital value at the sub-pixel.)

>> When I was thinking about the sub-areas to implement an optical DAC, I was thinking about this in the low power realm of a self-sustaining MIP LCD without display refresh, but with more bit depth.

Yes, this is correct. You can call it an optical DAC, a term I otherwise never heard before. :-) The summation happens in the eyes because of spatial/temporal resolution limits.

>> The difference with active-matrix is that it is analog, right? I.e. the DAC is in the part of the display driver that is generating a pixel serialized signal that is distributed out to the panel lines and columns? So the different sub-pixel levels are analog voltages applied during this refresh, and the dynamic "memory" is some combination of the floating transistor input and the intrinsic physical hysteresis of the liquid crystal cell. (By contrast, MIP is actually holding a digital value at the sub-pixel.)

Yes.

Without digital memory in pixel, the DAC(s) are outside the pixel array. Could be common across the entire panel (would need very high speed then), one per column, etc.

>> What I didn't fully appreciate is the nice analogy of regular active matrix LCD to DRAM.

Guess what, the said "DRAM" can be read as well, not just written to! I have previously (nearly two decades back) designed sophisticated circuits for display / pixel calibration using this. To be clear, the purpose was not to use a display panel as memory, nor was I able to use such methods for display-integrated touch-sensing*. My core purpose was pixel characterization, global auto-configuration of the controller electronics based on measurements of electrical-to-optical transfer curves, panel uniformity calibration, dead pixel detection, etc. In one of the projects, I was writing specific data to the display panel, but doing that and erasing it so fast that (even expert) human eyes could not see it. :-)

* There likely have been advancements for this since then.

Thanks. It's always interesting what the actual issues and engineering look like.

There are definitely a few reasons, but one of them is that you have to ask the GPU to do ~60x less work when you render 60x fewer frames.

PSR (panel self-refresh) lets you send a single frame from software and tell the display to keep using that.

You don't need to render the same frame 60 times in software just to keep it visible on screen.

How often is that used? Is there a way to check?

With the amount of bullshit animations all OSes come with these days, enabled by default, and most applications being webapp with their own secondary layer of animations, and with the typical developer's near-zero familiarity with how floating point numbers behave, I imagine there's nearly always some animation somewhere, almost but not quite eased to a stop, that's making subtle color changes across some chunk of the screen - not enough to notice, enough to change some pixel values several times per second.

I wonder what existing mitigations are at play to prevent redisplay churn? It probably wouldn't matter on Windows today, but will matter with those low-refresh-rate screens.

Android has a debug tool that flashes colors when any composed layer changes. It's probably an easy optimization for them to not re-render when nothing changes.
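Something like the following toy logic; the layer objects here are stand-ins for illustration, not Android's actual compositor API:

    # Skip the composition pass entirely when no layer reports damage.
    from dataclasses import dataclass

    @dataclass
    class Layer:
        name: str
        pixels: list
        damaged: bool = False

    def composite(layers):
        if not any(layer.damaged for layer in layers):
            return None                       # no GPU pass; panel can self-refresh
        frame = [p for layer in layers for p in layer.pixels]  # toy composition
        for layer in layers:
            layer.damaged = False
        return frame

    layers = [Layer("wallpaper", [0] * 4), Layer("clock", [1] * 4)]
    print(composite(layers))    # None: nothing marked damaged, nothing re-rendered
    layers[1].damaged = True    # the clock ticked
    print(composite(layers))    # a new frame is produced only now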

I never thought about it, but you've made me realise that a lot of people in our industry have been so enthusiastically working on random "creative" things that, at best, no one even asked for, and that turn out to hurt the end users in ways no one even knows about.

I used to be a front-end dev and I always hated that animation was coded per element. There should just be a global graphics API that does all the morphing and magic moves, which the user can turn off in the OS.

Normally, your posts are very coherent, but this one flies off the rails. (Half joking: Did someone hack your account!?) I don't understand your rant here:

    > With the amount of bullshit animations all OSes come with these days, enabled by default, and most applications being webapp with their own secondary layer of animations, and with the typical developer's near-zero familiarity with how floating point numbers behave
I use KDE/GNU/Linux, and I don't see a lot of unnecessary animations. Even at work where I use Win11, it seems fine. "[M]ost applications being webapp": This is a pretty wild claim. Again, I don't think any apps that I use on Linux are webapps, and most at work (on Win11) are not.

Seriously? What is _this_ comment? TeMPOraL makes perfect sense.

LLMs learned that users have post histories? /s

Why? Surely copying the same pixels out sixty times doesn't take that much power?

The PCWorld story is trash and completely omits the key point of the new display technology, which is right in the name: "Oxide." LG has a new low-leakage thin-film transistor[1] for the display backplane.

Simply, this means each pixel can hold its state longer between refreshes. So, the panel can safely drop its refresh rate to 1Hz on static content without losing the image.

Yes, even "copying the same pixels" costs substantial power. There are millions of pixels with many bits each. The frame buffer has to be clocked, data latched onto buses, SERDES'ed over high-speed links to the panel drivers, and used to drive the pixels, all while generating heat fighting the reactance and resistance of various conductors. Dropping the entire chain to 1Hz is meaningful power savings.
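A back-of-the-envelope version of that chain cost, with an assumed panel resolution and an assumed energy-per-bit figure covering framebuffer reads, the link, and the drivers (not LG's numbers):

    # Energy to move the frame data at a given refresh rate.
    width, height, bpp = 2880, 1800, 30        # assumed resolution and bits per pixel
    bits_per_frame = width * height * bpp
    energy_per_bit_J = 50e-12                  # assumed ~50 pJ/bit for the whole chain

    def chain_power_W(refresh_hz):
        return bits_per_frame * refresh_hz * energy_per_bit_J

    print(f"60 Hz: {chain_power_W(60):.2f} W")   # ~0.47 W just to keep repainting
    print(f" 1 Hz: {chain_power_W(1):.3f} W")    # ~0.008 W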

[1] https://news.lgdisplay.com/en/2026/03/lg-display-becomes-wor...

So it's a Sharp MIP scaled up? https://sharpdevices.com/memory-lcd/

Sharp MIP makes every pixel an SRAM bit: near-zero current and no refresh necessary. The full color moral equivalent of Sharp MIP would be 3 DACs per pixel. TFT (à la LG Oxide) is closer to DRAM, except the charge level isn't just high/low.

So, no, there is a meaningful difference in the nature of the circuits.

Thanks. Great explanation.

Copying: Draw() is called 60 times a second.

It isn't for any reasonable UI stack. For instance, the xdamage X11 extension for this was released over 20 years ago. I doubt it was the first.

Xdamage isn’t a thing if you’re using a compositor for what it’s worth. It’s more expensive to try to incrementally render than to just render the entire scene (for a GPU anyway).

And regardless, the HW path still involves copying the entire frame buffer - it’s literally in the name.

That's not true. I wrote a compositor based on xcompmgr, and there damage was widely used. It's true that it's basically pointless to do damage tracking for the final pass on GL, but damage was still useful to figure out which windows required new blurs and updated glows.

At the software level yes, but it seems nobody has taken the time to do this at the hardware level as well. This is LG's stab at it.

Apple has been doing this since they started having 'always-on' displays.

So has Samsung, but we're talking mobile devices with OLED displays, which is an entirely different universe both hardware and software-wise.

What's your mental model of what happens when a dirty region is updated and now we need to get that buffer onto the display?

It was, but xdamage is part of the compositing side of the final bitmap image generation, before that final bitmap is clocked out to the display.

The frame buffer (at least the portion of the GPU responsible for reading the frame buffer and shipping the contents out over the port to the display), the communications cable to the display screen itself, and the display screen were still reading, transmitting, and refreshing every pixel of the display at 60 Hz (or more).

This LG display tech claims to be able to turn that last portion's speed down to a 1 Hz rate from whatever it usually runs at.

You forget that all modern UI toolkits brag about who has the highest frame rate, instead of updating only what's changed and only when it changes.


I think the idea is that in an always-on display mode, most of the screen is black and the rest is dim, so circuitry power budget becomes a much larger fraction of overhead.

Ohh like property tax on a vacant building

I interpreted that bit as E2E system uptime being up by 48%. Sounds more plausible to me, as there'd be fewer video frames that would need to be produced and pushed out.

This is an OLED display, so I don't think the control electronics are actually any less active. (They would be for LCD, which is where most of these low-refresh-rate optimizations make sense.)

The connection between the GPU and the display has been run length encoded (or better) since forever, since that reduces the amount of energy used to send the next frame to the display controller. Maybe by "1Hz" they mean they also only send diffs between frames? That'd be a bigger win than "1Hz" for most use cases.

But, to answer your question, the light emission and computation of the frames (which can be skipped for idle screen regions, regardless of frame rate) should dwarf the transmission cost of sending the frame from the GPU to the panel.

The more I think about this, the less sense it makes. (The next step in my analysis would involve computing the wattage requirements of the CPU, GPU, and light emission, then comparing that to the Wh capacity of the laptop battery and the advertised battery life.)

Not OLED.

> LG Display is also preparing to begin mass production of a 1Hz OLED panel incorporating the same technology in 2027.

> This is an OLED display

The LG press release states that it's LCD/TFT.

https://news.lgdisplay.com/en/2026/03/lg-display-becomes-wor...

> The more I think about this, the less sense it makes

And yet, it’s the fundamental technology enabling always on phone and smartwatch displays

The intent of this is to reduce the time that the CPU, GPU, and display controller are in an active state (as well as small reductions in the power of components in between those stages).

For small screen sizes and low information density displays, like a watch that updates every second, this makes a lot of sense.

It would make a lot of sense in situations where the average light-generating energy is substantially smaller:

Pretend you are a single pixel on a screen (laptop, TV) that emits photons into a large cone of steradians, of which a viewer's pupil subtends only a tiny pencil ray; 99.99% of the light just misses the observer's pupils. In this case the technology seems to offer few benefits, since the energy consumed by the link (generating a clock and transmitting data over wires) is dwarfed by the energy consumed in generating all this light (which mostly misses human pupils)!
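A quick geometric sanity check on that fraction, assuming a hemispherical emission cone, a ~4 mm pupil, and a 0.5 m viewing distance:

    # Fraction of a pixel's emitted light that actually lands in the viewer's pupils.
    import math

    pupil_radius_m = 0.002                   # ~4 mm pupil diameter (assumed)
    view_distance_m = 0.5                    # laptop viewing distance (assumed)
    emission_solid_angle_sr = 2 * math.pi    # pixel radiates into a hemisphere

    pupil_solid_angle_sr = math.pi * pupil_radius_m**2 / view_distance_m**2
    fraction = 2 * pupil_solid_angle_sr / emission_solid_angle_sr   # two eyes

    print(f"captured fraction ~ {fraction:.1e}")  # ~1.6e-05, i.e. well under 0.01%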

Now consider smart glasses / HUDs: the display designer knows the approximate position of the viewer's eyes. The optical train can be designed so that a significantly larger fraction of the generated photons arrives on the retina. Indeed, XReal's / NReal's line of smart glasses consumes about 0.5 W! In such a scenario the link's energy consumption becomes a sizable proportion of the total; hence having a low-energy state that still presents content but updates less frequently makes sense.

One would have expected smart glasses to already outcompete smartphones and laptops just by prolonged battery life. Or, conversely, splitting the difference in energy saved, one could keep half of the energy saved (doubling battery life) while allocating the other half to more intensive calculations (GPU, CPU, etc.).

You may be overestimating how much power a modern laptop backlight consumes and underestimating how much power a modern CPU/GPU consumes at idle. They are both significant factors.

On the computer you're reading this on, the screen content is probably not moving. That is time during which the GPU can sleep. That is a lot of power saved, regardless of the display.

So yes, "generating all this light" takes up a lot of power. But just because it does, doesn't mean that overall battery life wouldn't be benefited from improvements elsewhere.

And in fact, the average backlight is about 5 W, maybe 10 W. The average laptop GPU, when idle and awake, consumes about 5 W as well. If you can get that idle GPU consumption down substantially, that's a potential 20% (or more!) improvement in battery life.
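Rough arithmetic behind that claim, using the wattages above plus assumed figures for battery capacity and the rest of the system:

    # Battery life before/after cutting idle GPU power (illustrative numbers).
    battery_Wh = 70.0
    backlight_W = 5.0
    gpu_idle_W = 5.0
    rest_of_system_W = 10.0     # assumed CPU, RAM, SSD, Wi-Fi at light load

    before_W = backlight_W + gpu_idle_W + rest_of_system_W   # 20 W
    after_W = backlight_W + 1.0 + rest_of_system_W           # idle GPU cut to ~1 W

    print(f"before: {battery_Wh / before_W:.1f} h")   # 3.5 h
    print(f"after:  {battery_Wh / after_W:.1f} h")    # ~4.4 h, a ~25% gain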

Your GPU rendering 1 frame vs your GPU rendering 60 frames.

In cases where a 1 Hz mode is feasible, the GPU doesn't render 60 fps anyway.

Really disappointing to only learn this after a decade, but on Linux, changing from 60 Hz to 40 Hz decreased my power draw by 40% in the hour since reading this comment.


Before OLED (and similar), most displays were lit with LEDs (behind or around the screen, through a diffuser, then through liquid crystals) which was indeed the dominant power draw... like 90% or so!

But the article is about an OLED display, so the pixels themselves are emitting light.

> But the article is about an OLED display

The article is about an LCD display, actually.

I just wish "we" wouldn't have discarded the option to use pure black for dark modes in favor of a seemingly ever-brightening blue-grey...

It doesn't. They take extreme use cases, such as watching video at maximum brightness until the battery depletes, where 90% of the power consumption is the display. But in realistic use cases, where the CPU is actually doing things, the fraction of power draw consumed by the display is much smaller.

For whatever reason I keep catching my MacBook on max brightness. Maybe not an unrealistic test.