Rust GUI is in a tough spot right now, with critical dependencies under-staffed and lots of projects half-implemented. I think the advent of LLMs has been timed perfectly to set the ecosystem back by a few more years. I wrote about it, and how it affected our development, yesterday: https://tritium.legal/blog/desktop
Interesting read; however, as someone from the same age group as Casey Muratori, I find this does not make much sense.
> The "immediate mode" GUI was conceived by Casey Muratori in a talk over 20 years ago.
He may have made it known to people not old enough to have lived through the old days; however, this is how we used to program GUIs on 8- and 16-bit home computers, and it has always been a thing on game consoles.
I think this is the source of the confusion:
> To describe it, I coined the term “Single-path Immediate Mode Graphical User Interface,” borrowing the “immediate mode” term from graphics programming to illustrate the difference in API design from traditional GUI toolkits.
— https://caseymuratori.com/blog_0001
Obviously it’s ludicrous to attribute “immediate mode” to him. As you say, it’s literally decades older than that. But it seems like he used immediate mode to build a GUI library and now everybody seems to think he invented immediate mode?
Is Win16/Win32 GDI, which goes back to 1985, an immediate-mode GUI?
Win32 GUI common controls are a pretty thin layer over GDI and you can always take over WM_PAINT and do whatever you like.
If you make your own control you must handle WM_PAINT, which seems pretty immediate to me.
https://learn.microsoft.com/en-us/windows/win32/learnwin32/y...
The difference between a game engine and, say, GDI is just window buffer invalidation: WM_PAINT is not called every frame, only when Windows thinks the window's rectangle has changed and needs to be redrawn, independently of the screen refresh rate.
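To make that concrete, here is an untested sketch of a WM_PAINT handler, written in Rust against the `windows` crate since this is a Rust thread (module paths and exact signatures vary by crate version, so treat this as illustrative). The point is that nothing is retained for you between paints; the handler re-draws the dirty rectangle from application state every time it fires:

```rust
use windows::Win32::Foundation::{HWND, LPARAM, LRESULT, WPARAM};
use windows::Win32::Graphics::Gdi::{BeginPaint, EndPaint, FillRect, HBRUSH, PAINTSTRUCT};
use windows::Win32::UI::WindowsAndMessaging::{DefWindowProcW, WM_PAINT};

// Sketch only: window-class registration and the message loop are omitted.
unsafe extern "system" fn wndproc(hwnd: HWND, msg: u32, w: WPARAM, l: LPARAM) -> LRESULT {
    match msg {
        WM_PAINT => {
            let mut ps = PAINTSTRUCT::default();
            let hdc = BeginPaint(hwnd, &mut ps);
            // Windows hands us only the invalid rect (ps.rcPaint); we are
            // expected to repaint it wholesale from our own state.
            FillRect(hdc, &ps.rcPaint, HBRUSH::default()); // real code draws here
            let _ = EndPaint(hwnd, &ps);
            LRESULT(0)
        }
        _ => DefWindowProcW(hwnd, msg, w, l),
    }
}
```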
I guess I think of retained vs. immediate at the graphics library/driver level, because that allows the GPU to take over more, store the objects in VRAM, and redraw them. At the GUI level that's just user-space abstraction over the rendering engine, but the line is blurry.
No, that is event-based programming, and also the basis of retained rendering, because you already have the controls that you compose or subclass.
Handling WM_PAINT is no different from something like OnPaint() on a base class.
This was actually one of the mindset shifts when moving from MS-DOS to Windows graphics programming.
Event-based vs. loop-based is a separate axis from retained vs. immediate.
The canvas API in the browser is immediate mode, driven by events such as requestAnimationFrame.
If you do not draw in WM_PAINT, Windows will not redraw any state within your control on its own.
GDI is most certainly an immediate-mode API, and if you have been around since the DOS days you would remember using WM_PAINT to write a game-loop renderer before Direct2D existed on Windows. Remember BitBlt for off-screen rendering with GDI in WM_PAINT?
https://learn.microsoft.com/en-us/windows/win32/direct2d/com...
It's like the common claim that data-oriented programming came out of game development. It's ahistorical, but a common belief. People can't see past their heroes (Casey Muratori, Jonathan Blow) or the past decade or two of work.
I partly agree, but I think you're overcorrecting. Game developers didn't invent data-oriented design or performance-first thinking. But there's a reason the loudest voices advocating for them in the 2020s come from games: we work in one of the few domains where you literally cannot ship if you ignore cache lines and data layout. Our users notice a 5ms frame hitch, while web developers can add another React wrapper and still ship.
Computing left game development behind. Whilst the rest of the industry built shared abstractions, we worked in isolation with closed tooling. We stayed close to the metal because there was nothing else.
When Casey and Jon advocate for these principles, they're reintroducing ideas the broader industry genuinely forgot, because for two decades those ideas weren't economically necessary elsewhere. We didn't preserve sacred knowledge. We just never had the luxury of forgetting performance mattered, whilst the rest of computing spent 20 years learning it didn't.
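For anyone who hasn't met the idea, here's a stock toy illustration (mine, not from anyone upthread) of what "data layout" means here: the same entities laid out as an array-of-structs versus a struct-of-arrays, where only the latter keeps the hot loop's data dense in cache.

```rust
#![allow(dead_code)]

// Array-of-structs: a position loop drags names, HP, etc. through the cache.
struct EntityAoS {
    pos: [f32; 3],
    vel: [f32; 3],
    name: String,
    hp: u32,
}

// Struct-of-arrays: each field is contiguous, so the hot loop below touches
// only position and velocity bytes.
struct EntitiesSoA {
    pos: Vec<[f32; 3]>,
    vel: Vec<[f32; 3]>,
    name: Vec<String>,
    hp: Vec<u32>,
}

fn integrate(e: &mut EntitiesSoA, dt: f32) {
    for (p, v) in e.pos.iter_mut().zip(&e.vel) {
        for i in 0..3 {
            p[i] += v[i] * dt;
        }
    }
}

fn main() {
    let mut world = EntitiesSoA {
        pos: vec![[0.0; 3]; 4],
        vel: vec![[1.0, 0.0, 0.0]; 4],
        name: vec![String::new(); 4],
        hp: vec![100; 4],
    };
    integrate(&mut world, 0.016);
    assert_eq!(world.pos[0][0], 0.016);
}
```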
> I think you're overcorrecting.
I don't understand this part of your comment; it seems like you're replying to some other comment, or to something that isn't in mine. How am I overcorrecting? A statement of fact, that game developers didn't invent these things even though that's a common belief, is not an overcorrection. It's just a correction.
Ah, I read your comment as "game devs get too much credit for this stuff and people are glorifying Casey and Jon" and ran with that, but you were just correcting the historical record.
My bad. I think we're aligned on the history; I was making a point about why they're prominent advocates today (and why people are attributing invention to them) even though they didn't invent the concepts.
I don't really like this line of discourse because few domains are as ignorant of computing advances as game development. Which makes sense, they have real deadlines and different goals. But I often roll my eyes at some of the conference talks and twitter flame wars that come from game devs, because the rest of computing has more money resting on performance than most game companies will ever make in sales. Not to mention, we have to design things that don't crash.
It seems like much of the shade is tossed at web front end, as if it were the only other domain of computing besides game dev.
I mean... fair point? I'm not claiming games are uniquely performance-critical.
You're right that HFT, large-scale backend, and real-time systems care deeply about performance, often with far more money at stake.
But those domains are rare. The vast majority of software development today can genuinely throw hardware or money at problems (even HFT and large backend systems). Backends are usually designed to scale horizontally, data science rents bigger GPUs, embedded gets more powerful SoCs every year. Most developers never have to think about cache lines because their users have fast machines and tolerant expectations.
Games are one of the few consumer-facing domains that can't do this. We can't mandate hardware (and attempts at doing so cost sales and attract community disgust), we can't hide latency behind async, and our users immediately notice a 5ms hitch. That creates different pressures: we're optimising for the worst case on hardware we don't control, whilst most of the industry optimises for the common case on hardware they choose.
You're absolutely right that we're often ignorant of advances elsewhere. But the economic constraint is real, and it's increasingly unusual.
I think we as software developers are resting on the shoulders of giants. It's amazing how fast and economical stuff like redis, nginx, memcached, and other "old" software is: written decades ago, mostly in C, by people who really understood what made it run fast (in a slightly different way to games, less about caches and data, and more about how the OS handles low-level primitives).
A browser like Chrome also rests on a rendering engine like Skia that has been optimized to the gills, so performance can at least theoretically be fast.
Then one tries to host static files on an Express web server, and is surprised to find that a powerful computer can only serve files at 40MB/s with the CPU at 100%.
I would like to think that a "Faustian deal" on performance exists: you give up 10, 50, or 90% of your performance in exchange for convenience.
But unfortunately experience shows there's no such thing, arbitrarily powerful hardware can be arbitrarily slow.
And as you contrast gamedev with other domains that get to hide latency: I don't think it's OK that a simple 3-column gallery page takes more than a second to load; people merely tolerate this, they don't enjoy it.
And ironically, I find that a lot of folks end up putting more work into optimizing their React layouts than it would have cost to render naively with a more efficient toolkit.
I am also not sure what advances game dev is missing out on. I guess devs are somewhat more reluctant to write awful code in the name of performance nowadays, but I'd love to hear what advances gamedev could learn from the broader software world.
The TL;DR of what I wanted to say is that I wish there were a linear performance-convenience scale, where we could pick a certain point, use techniques conforming to it, and trade two thirds of the max speed for dev experience, knowing our performance targets allow for that.
But unfortunately that's not how it works, if you choose convenience over performance, your code is going to be slow enough that users will complain, no matter what hardware you have.
It clearly didn’t come out of game dev. Many people doing high performance work on either embedded or “big silicon” (amd64) in that era were fully aware of the importance of locality, branch prediction, etc
But game dev, in particular Mike Acton, did an amazing job of making it more broadly known. His CppCon talk from 2014 [0] is IMO one of the most digestible ways to start thinking about performance in high throughput systems.
In terms of heroes, I’d place Mike Acton, Fabian Giesen [1], and Bruce Dawson [2] at the top of the list. All solid performance-oriented people who’ve taken real time to explain how they think and how you can think that way as well.
I miss being able to listen in on gamedev Twitter circa 2013 before all hell broke loose.
[0] https://youtu.be/rX0ItVEVjHc?si=v8QJfAl9dPjeL6BI
[1] https://fgiesen.wordpress.com/
[2] https://randomascii.wordpress.com/
There are also good reasons that immediate-mode GUIs are largely only ever used by games: they are absolutely terrible for regular UI needs. Since Rust gaming is still largely non-existent, it's hardly surprising that things like egui are similarly struggling. That isn't (or shouldn't be) any reflection on whether or not Rust GUIs as a whole are struggling.
Unless the Rust ecosystem made the easily predicted terrible choice of rallying behind immediate mode GUIs for generic UIs...
>Unless the Rust ecosystem made the easily predicted terrible choice of rallying behind immediate mode GUIs for generic UIs...
That's exactly what they did :D
They didn't. Biggest Rust GUI by popularity is Dioxus.
I mean, fair enough, but [at least] Wikipedia agrees with that take.
> Graphical user interfaces traditionally use retained mode-style API design,[2][5] but immediate mode GUIs instead use an immediate mode-style API design, in which user code directly specifies the GUI elements to draw in the user input loop. For example, rather than having a CreateButton() function that a user would call once to instantiate a button, an immediate-mode GUI API may have a DoButton() function which should be called whenever the button should be on screen.[6][5] The technique was developed by Casey Muratori in 2002.[6][5] Prominent implementations include Omar Cornut's Dear ImGui[7] in C++, Nic Barker's Clay[8][9] in C and Micha Mettke's Nuklear[10] in C.
https://en.wikipedia.org/wiki/Immediate_mode_(computer_graph...
[Edit: I'll add an update to the post to note that Casey Muratori simply “coined the term” but that it predates his video.]
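For concreteness, here's a minimal self-contained sketch of the API difference the quote describes; `Ui` and `do_button` are invented for illustration, not taken from any real library:

```rust
// Immediate-mode style: the widget "exists" only while do_button is running.
// Drawing and interaction are handled in one call, once per frame.
struct Ui {
    clicked: Option<&'static str>, // which button the backend saw clicked this frame
}

impl Ui {
    fn do_button(&mut self, label: &'static str) -> bool {
        // A real library would also lay out and draw the button here.
        self.clicked == Some(label)
    }
}

// Called every frame; there is no CreateButton() handle retained anywhere.
fn frame(ui: &mut Ui, count: &mut u32) {
    if ui.do_button("Increment") {
        *count += 1;
    }
}

fn main() {
    let mut count = 0;
    // Simulate two frames: one with a click, one without.
    frame(&mut Ui { clicked: Some("Increment") }, &mut count);
    frame(&mut Ui { clicked: None }, &mut count);
    assert_eq!(count, 1);
}
```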
Dig out any source code for Atari, Spectrum or Commodore 64 games, written in Assembly, or early PC games, for example.
And you will see which information is more accurate.
Yeah no doubt you're correct. I wasn't disagreeing - just establishing the reasonableness of my original statement. I must have read it in the Dear ImGui docs somewhere.
I am pretty sure there are people here qualified enough to edit that Wikipedia page in a proper way.
Wikipedia clearly has never been shown to have faults regarding accuracy.
{{cn}}
> Maybe he might have made it known to people
Yes, he coined the term rather than invent the technique
He definitely did not name it. IRIS GL was termed “immediate mode” back in the 80’s.
He coined the term in the context of UI, by borrowing the existing term that was already used in graphics. Drawing that parallel was the point.
It might be more accurate to say that he repopularized the term among a new generation of developers. Immediate vs Retained mode UI was just as much a thing in early GUIs.
It was a swinging pendulum. At first everything was immediate mode because video RAM was very scarce. Initially there was only enough VRAM for the frame buffer, and hardly any system RAM to spare. But once both categories of RAM started growing, there was a movement to switch to retained mode UI frameworks. It wasn’t until the early 00’s that GPUs and SIMD extensions tipped the scales in the other direction - it was faster to just re-render as needed rather than track all these cached UI buffers, and allowed for dynamic UI motifs “for free.”
My graying beard is showing, though, as I did some game dev in the late 90's on 3Dfx hardware, and learned UI programming on Win95 and System 7.6. Get off my lawn.
I won't be bothered to go hunting for digital copies of 1980's game development books, but I have my doubts on that.
Your recent post resonated with me deeply; as someone heavily invested in Rust GUI development, I've fallen into this same conundrum. I think ultimately the Rust GUI ecosystem is still not mature, and as a consequence we have to make big concessions when picking a framework.
I also came to a similar endpoint when building out a fairly large GUI application using egui. While egui solves the "draw widgets" part of building out the application, inevitably I had to restructure my app entirely with a new architecture to make it maintainable. In many places the "immediate" nature of the GUI mutably editing the state was no longer an advantage. Not to mention that UI code I wrote 6 months ago became difficult to read, especially if there was advanced layout happening.
Ultimately I've boiled my choices down to:
- egui for practicality but you pay the price in architecture + styling
- iced for a nice architecture but you have to roll all your own widgets
- slint, maybe one day, once they make text rendering a higher priority; but even then the architecture side is not solved for you either
- tauri/dioxus/electron if you're not a purist like me
- Rewind 20 years and use Qt/WPF/etc.
If your main gripe about the Rust GUI ecosystem is that it's not mature then rewinding 20 years and using Qt/WPF/etc sounds like an excellent alternative. Old and mature versus modern and immature.
> Rust GUI is in a tough spot right now with critical dependencies under-staffed and lots of projects half implemented.
Down the stack, low-level 3D acceleration is in a rough spot too unfortunately. The canonical Rust Vulkan wrapper (Ash) hasn't cut a release for nearly two years, and even git main is far behind the latest spec updates.
I am not convinced a thin FFI wrapper needs frequent updates, barring updates to the underlying API. What updates do you think it should have?
The underlying Vulkan API is updated constantly, the last spec update was about two weeks ago. Even if we only count the infrequent major milestone versions, Ash is still stuck at Vulkan 1.3, when Vulkan 1.4 launched in December of 2024.
Damn. I just dove back into a Vulkan project I was grinding through to learn graphics programming. Life, and not having the time to chase graphics-programming bugs, led me to put it aside for a year and a half, and these new models were able to help me squash my bug and grok things fully enough to dive back in, but I never even considered that the Rust Vulkan ecosystem was worse off. It was already an insane experience getting imgui, winit, and ash to play nice together; after bouncing back and forth with WGPU, I assumed Vulkan via ash was the safer bet.
IIRC there is another raw Vulkan library that just generates bindings as well and stays up to date, but that comes with its own issues.
Vulkano? I remember that! Looks like it was updated last week, but I don't know if it's current with the Vulkan API, nor how it generally compares to Ash.
WGPU + Winit + EGUI + EGUI component libs is its own joy of compatibility, but anecdotally they have been updating in reasonable sync. Things can get out of hand if you wait too long between updates, though!
Vulkano is a somewhat higher level library which aims to be safe and idiomatic. It looks like it generates its own Vulkan bindings directly from the vk.xml definitions, but it also depends on Ash, and this comment suggests that both generators need to be kept in sync so they're effectively beholden to Ash's release cadence anyway.
https://github.com/vulkano-rs/vulkano/blob/master/Cargo.toml...
Maybe that's so they can interop with other crates which use Ash's types?
What would be the best way to use Vulkan 1.4 in Rust today? Using the C headers with bindgen or writing my own vk.xml generator?
Ah... that does make sense.
vk.xml[1] is the canonical Vulkan specification; this is updated essentially weekly.
The C++ equivalent, Vulkan-Hpp[2], follows extremely closely behind. Plus, ash isn't just an FFI wrapper; it does quite a bit of RAII-esque state and function pointer management that is generally required for Vulkan.
[1]: https://github.com/KhronosGroup/Vulkan-Docs/blob/main/xml/vk...
[2]: https://github.com/KhronosGroup/Vulkan-Hpp/
The canonical Vulkan wrapper is wgpu.
WGPU is a much higher level abstraction layer which itself depends on Ash for Vulkan FFI.
https://github.com/gfx-rs/wgpu/blob/trunk/Cargo.toml#L264
Thank you I didn’t know that. I assume it is well maintained then? Are there outstanding issues?
In my experience immediate-mode GUIs almost always ignore internationalization and accessibility.
The thing you get by using an OS widget and putting a string in it is that the OS can interact with the string. It can read it out loud, translate it, fill it in with a password, look it up in a dictionary, edit it right to left, handle input method editors whose hotkeys conflict with the app doing its own editing, etc.
There's a reason why the most popular ImGUIs are targeted at game dev tools and in-game UIs, and not end-user UIs.
You could potentially make an immediate-mode GUI that wraps a retained GUI; arguably that is what React is. From the programmer's POV it's supposed to look like imgui code all the way down. It runs into the issue of having to keep two representations in sync, the UI represented by React and the actual widgets (HTML or native), and that's where all its complications come from.
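A toy sketch of that sync problem, with every name invented for illustration: immediate-style calls build a fresh description each frame, and an end-of-frame diff decides which retained widgets to create, mutate, or destroy.

```rust
struct Node {
    key: usize,   // stable identity across frames
    text: String, // the "retained" widget's content
}

struct Ui {
    prev: Vec<Node>, // what the retained layer showed last frame
    next: Vec<Node>, // what the immediate-style calls declared this frame
}

impl Ui {
    // Immediate-style API: the caller just declares what should exist now.
    fn label(&mut self, key: usize, text: &str) {
        self.next.push(Node { key, text: text.to_string() });
    }

    // End of frame: reconcile the two representations.
    fn end_frame(&mut self) {
        for node in &self.next {
            match self.prev.iter().find(|p| p.key == node.key) {
                Some(p) if p.text == node.text => {} // unchanged: keep the widget
                Some(_) => println!("update widget {}", node.key), // mutate retained widget
                None => println!("create widget {}", node.key),    // new this frame
            }
        }
        for p in &self.prev {
            if !self.next.iter().any(|n| n.key == p.key) {
                println!("destroy widget {}", p.key); // disappeared this frame
            }
        }
        self.prev = std::mem::take(&mut self.next);
    }
}

fn main() {
    let mut ui = Ui { prev: Vec::new(), next: Vec::new() };
    ui.label(1, "Hello");
    ui.end_frame(); // create widget 1
    ui.label(1, "Hello, world");
    ui.end_frame(); // update widget 1
    ui.end_frame(); // destroy widget 1
}
```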
Yes, one argument that I didn't make in the post but that does favor immediate mode is that you can somewhat straightforwardly convert from an immediate-mode GUI to retained mode by just introducing your own abstractions. In some sense this makes you more disciplined about the FPS, which could be a net win overall.
[Note that Tritium at least is translated into a number of different languages. That part isn't that hard.]
This is why I'm using LLMs to help me hand code the GUI for my Rust app in SDL2. I'm hoping that minimizing the low-level, drawing-specific code and maximizing the abstractions in Rust will allow me to easily switch to a better GUI library if one arises. Meanwhile, SDL is not half bad.
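In case it's useful to anyone attempting the same, the boundary I'm describing looks roughly like this (all names hypothetical): widgets target a small trait, and only one module knows about SDL2, so switching GUI libraries later means writing one new impl.

```rust
#[derive(Clone, Copy)]
pub struct Rect { pub x: i32, pub y: i32, pub w: u32, pub h: u32 }

#[derive(Clone, Copy)]
pub struct Color { pub r: u8, pub g: u8, pub b: u8 }

// The only drawing surface the rest of the app ever sees.
pub trait Painter {
    fn fill_rect(&mut self, rect: Rect, color: Color);
    fn text(&mut self, x: i32, y: i32, s: &str);
}

// App-level widget code never mentions SDL2 directly.
pub fn draw_button(p: &mut dyn Painter, rect: Rect, label: &str) {
    p.fill_rect(rect, Color { r: 60, g: 60, b: 60 });
    p.text(rect.x + 8, rect.y + 8, label);
}

// A stub backend standing in for the SDL2-specific impl (which would call
// canvas.fill_rect and a font renderer instead of println!).
struct LogPainter;

impl Painter for LogPainter {
    fn fill_rect(&mut self, r: Rect, c: Color) {
        println!("fill {}x{} at ({},{}) rgb({},{},{})", r.w, r.h, r.x, r.y, c.r, c.g, c.b);
    }
    fn text(&mut self, x: i32, y: i32, s: &str) {
        println!("text {s:?} at ({x},{y})");
    }
}

fn main() {
    draw_button(&mut LogPainter, Rect { x: 10, y: 10, w: 120, h: 32 }, "Save");
}
```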
Honestly I think all native GUI is in a tough spot right now. The desktop market has matured so there aren't any large companies willing to put a ton of money into new fully featured GUI libraries. What corporate investment we do see into new technologies (Electron, SwiftUI, React Native) is mainly to allow developers to reuse work from other platforms like web and mobile in order to cut costs on desktop development. Without that corporate investment I don't think we'll ever see any new native GUI libraries become as fully featured as Win32 or Qt Widgets.
I 100% agree on pretty much everything. The "webapp masquerading as a native app" is a huge problem, and IMO, at least partially because of a failure of native-language tooling (everything from UI frameworks to build tools --- as the latter greatly affect ease of use of libraries, which, in turn, affects popularity with new developers).
To be honest, I've been (slowly) working towards my own native GUI library, in C. It's a big undertaking, but one saving grace is that --- at least on my part --- I don't need the full featureset of Qt or similar.
My plan for the portability issue is to flip the script --- make it a native library that can compile to the web (using actual DOM/HTML elements there, not canvas/WebGL/WGPU). And on Android/iOS/etc, I can already do native anyway.
Though I should add that a native look is not a goal in my case (quite a few libraries already go for that, go use those! --- and some, like Windows, don't really have a native look), which also means that I don't have to use native widgets on e.g. Android. The main reason for using DOM on the web is to be able to provide for a more "web-like" experience, to get e.g. text selection working properly, as well as IME, easier debuggability, and accessibility (an explicit goal, though not a short-term one --- in part due to a lack of testers). Though it wouldn't be too much of a stretch to allow either canvas or DOM on the web at that point --- by treating the web the same as a native platform in terms of displaying the widgets.
It's more about native performance, low memory use, and easy integration without a scripting engine inbetween --- with a decent API.
I am a bit on the fence between an immediate-mode vs retained-mode API. I'll probably do a semi-hybrid, where it's immediate-y but with a way to explicitly provide "keys" (kind of like Flutter, I think?).
Open source GUI development is perpetually cursed by underestimating the difficulty of the problem.
A mature high-quality GUI with support for all the features of a modern desktop UI, accessibility, support for all the display variations you encounter in the wild, high quality rendering, high performance, low overhead, etc. is a development task on par with creating a mature game engine like Unity.
Nearly all open source GUI projects get 80% of the way there and stall, not realizing that they are only 20% of the way there.
You're right, and I think that's because the core functionality of a UI lib is not too difficult. I've tinkered in that space myself, and it's a fun side project.
Then you start to think about full Unicode support, right-to-left rendering, and so on. Then you start to think about properly implementing accessibility features. The necessary work increases by an order of magnitude. And it's not fun work. So you stall out with a bare-bones implementation.
> We ignore for these purposes Zed's GPUI which the Zed team has transparently, and understandably abandoned as an open source endeavour
Do you have a source for this?
https://news.ycombinator.com/item?id=47003569
Ok so it is not going closed source, they are just going to extend it as they need to drive Zed features. Totally understandable for an in-house UI framework, this is why you’d build one yourself anyway. I can imagine maintaining backwards compatibility, doing releases, writing documentation and growing a community around it is a considerable distraction from their product work.
The Zed team said it themselves. There is a direct quote in the parent thread.
I'd love to read a writeup of the state of Rust GUI and the ecosystem if you could point me at one.
https://www.boringcactus.com/2025/04/13/2025-survey-of-rust-...
I started writing a program that needed to have a table with 1 million rows. This means it needs to be virtualised. Pretty common in GUI libraries. The only Rust GUI library I found that could do this easily was gpui-component (https://github.com/longbridge/gpui-component). It also renders text crisply (rules out egui), looks nice with the default style (rules out GTK, FLTK, etc.), isn't web-based (rules out Dioxus), was pretty easy to use and the developers were very responsive.
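For anyone curious, the arithmetic a virtualised list does internally is small; here's a minimal sketch with invented names, where only the rows intersecting the viewport are ever instantiated:

```rust
// Map scroll position to the range of row indices that intersect the viewport.
fn visible_rows(scroll_y: f32, viewport_h: f32, row_h: f32, total: usize) -> std::ops::Range<usize> {
    let first = (scroll_y / row_h).floor() as usize;
    let last = ((scroll_y + viewport_h) / row_h).ceil() as usize;
    first.min(total)..last.min(total)
}

fn main() {
    // 1,000,000 rows of 24px each, a 600px viewport scrolled to y = 480,000:
    let range = visible_rows(480_000.0, 600.0, 24.0, 1_000_000);
    assert_eq!(range, 20_000..20_025);
    for i in range {
        // Only these ~25 rows get widgets this frame; `i` indexes the data model.
        let _ = i;
    }
}
```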
Definitely the best option today (I would say it's probably the first option that I haven't hated in some way). The only other reasonable choices I would say are:
* egui - doesn't render very nicely and some of the APIs are amateurish, but it's quick and it works. Good option for simple tools.
* Iced - looks nice and seemed to work fairly well. No virtualised lists though.
* Slint (though in some ways it is weird and it requires quite a lot of boilerplate setup).
All the others will cause you pain in some way. I think the "ones to watch" are:
* Makepad - from the demos I've seen this looks really cool, especially for arty GUI projects like synthesizers and car UIs. However it has basically no documentation so don't bother yet.
* Xilem - this is an attempt to make a 100% perfect Rust GUI library, which is cool and all, but I imagine it also will never be finished.
I wouldn't bother watching Makepad. They're in the process of rewriting the entire thing with AI and (it seems to me) destroying any value it has accumulated. And I also suspect Xilem will never be finished.
Beyond egui/Iced/Slint, I'd say the "ones to watch" are:
* Freya
* Floem
* Vizia
I think all three of those offer virtualized lists.
Dioxus Native, the non-webview version of Dioxus is also nearing readiness.
I’m currently writing an application that uses virtual lists in GTK: GtkListView, GtkGridView, there may be others. You ruled out GTK because of its looks I guess, I’m targeting Linux so the looks are perfect.
Yeah, I need cross platform, and GTK looks quite foreign on Windows/macOS IMO. I toyed with custom themes, but couldn't find any I liked for a cross platform look (wanted something closer to Fluent UI).
Not just because of its looks to be fair. Not being native Rust is a pain, and GTK only really works nicely on Linux. At least without a ton of effort to fix everything (I think some apps like maybe Mypaint have done that, but I don't want to).
I believe the latest Iced versions do have a `Lazy` widget wrapper, but that effectively means you need to build your own virtual list on top of it.
Custom widgets aren’t particularly hard to do in iced, but I wish some of those common cases would be committed back / made available.
Besides the virtualised lists above, another case I hit was layered images (sprites, for example). Not very hard to write my own, sure, but it'd be nice to have that out of the box, as in e.g. egui.
I've been somewhat involved in a project using Iced this week, seems pretty reasonable. Not sure how tricky it would be to e.g. invent custom widgets though.
I don't feel like having one main library for creating windows is bad; I feel like that way the work gets shared and more collaboration happens.
Really? It seems better than ever to me now that we have gpui-component. That seems to finally open the door to fully native GUIs that are polished enough even for commercial release. I haven't seen anything else that I would put in that category, but one choice is a start.
The problem is that Zed has understandably and transparently abandoned supporting GPUI as an open source endeavour except to the extent contributions align with its business mission.
I remember when that came out, but I'm not sure I understand the concern. They use GPUI, so therefore they MUST keep it working and supportable, even if updating it isn't their current priority. Or are you saying they have a closed source fork now?
Actually, this story is literally them changing their renderer on Linux, so they are maintaining it.
> except to the extent contributions align with its business mission
Isn't that every single open source project that is tied to a commercial entity?
I don't know what the message means exactly, but I can't plan to build on GPUI with it out there, especially when crates that don't carry that caveat are suffering from being under-resourced.
IMO, as long as Zed uses it, we are safe. If it doesn't, we aren't. I'm keeping it that simple.
I tried gpui recently and I found it to be very, very immature. Turns out even things like input components aren't in gpui, so if you want to display a dialog box with some text fields, you have to write it from scratch, including cursor, selection, clipboard etc. — Zed has all of that, but it's in their own internal crates.
Do you know how well gpui-component supports typical use cases like that? Edit boxes, buttons, scroll views, tables, checkbox/radio buttons, context menus, consistent native selection and clipboard support, etc. are table stakes for desktop apps.
Yeah, running just gpui is kinda like writing a react app without a component library. It is going to be on you to implement all your components.
All of those are handled. Run the "story" app. It is very impressive IMO.
Components list: https://longbridge.github.io/gpui-component/docs/components/
I'm not sure about that analogy: HTML provides the basic components atombender laments are missing from GPUI.
Thank you, that looks very promising indeed.
Can I humbly ask how LLMs and Rust GUIs are related?
They're just straining already-strained resources on the "contributions" side and pushing interest in other directions (e.g. Electron).
What's the point of writing open source if it's just going to be vacuumed up by the AI companies and regurgitated for $20 a month?
iced is doing great