I’m a bit of an optimist. I think this will smack the hands of developers who don’t manage RAM well and future apps will necessarily be more memory-efficient.
Hope dies last, as they say.
Then again, after many, many years of claims that the following year would be the year of the Linux Desktop, there seems to be more and more of a push into that direction. Or at least into a significant increase in market share. We can thank a current head of state for that.
Like the 1973 oil crisis? The end of V8 engines (pun intended).
Yeah, like that. Modern engines eclipsed pre-1970s engines in performance, efficiency, and even (yes I said it) in reliability.
At a cost of simplicity and beauty. And two lost decades of mediocre performance. Sigh
> I think this will smack the hands of developers who don’t manage RAM well
And hopefully kill Electron.
I have never seen the point of spinning up a 300+ MB app just to display something that ought to need only 500 KB to paint onto the screen.
As if native apps are any better. Books app on my mac takes 400MB without even having a single book open.
Won't happen. People are ok with swapping to their SSDs, Macbook Neo confirms that
It's happening. Cursor 3 moved to Rust. A lot of people are using Zed (written in Rust) instead of VS Code.
It won't be "happening" until Slack, Teams, and Discord leave Electron behind. They are the apps that need to be open 24/7.
It's not entirely clear what the connection is.
We're not doing Electron because some popular software is also using it. We're doing Electron because the ability to create truly cross-platform interfaces with the web stack is more important to us than 300 MB of user memory.
> web stack is more important to us than 300 MB of user memory.
May I never have to use or work on your project's software.
"I would rather spend the user's money than my engineer's time"
Teams works similarly in browser tab and "natively". Slack was similar if I remember correctly.
You should check the memory use of that browser tab. You’re not saving much either way running in a browser or in Electron, which is effectively a browser.
I only ever use Discord in a browser window.
Are you sure about Cursor? I haven't seen anything about that, I think it's still based on VSCode/electron.
"cursor 3" is just a landing page. The editor is still the old vscode fork...
The point is being able to write it once with web developers instead of writing it a minimum of twice (Windows and macOS) with much harder to hire native UI developers.
There is native to the OS and there's native to the machine.
Anyway, in both cases you don't really have to write it twice.
Native to the OS: write only the UI twice, but implement the core in Rust.
Native to the machine: write it only once, e.g. in iced, and compile it for every platform.
And HTML/CSS/JS are far more powerful for designing than any of SwiftUI/IB on Apple, Jetpack/XML on Android, or WPF/WinUI on Windows, leaving aside that this is what designers, design platforms and AI models already work best with. Even if all the major OSes converged on one solution, it still wouldn't compete on ergonomics or declarative power for designing.
Lol SwiftUI/Jetpack/WPF aren’t design tools, they’re for writing native UI code. They’re simply not the right tool for building mockups.
I don’t see how design workflows matter in the conversation about cross-platform vs native and RAM efficiency since designers can always write their mockups in HTML/CSS/JS in isolation whenever they like and with any tool of their choice. You could even use purely GUI-based approaches like Figma or Sketch or any photo/vector editor, just tapping buttons and not writing a single line of web frontend code.
Who said anything about mockups? Design goes all the way from concept to real-world. If a designer can specify declaratively how that will look, feel, and animate, that's far better than a developer taking a mockup and trying their hardest to approximate some storyboards. Even as a developer working against mockups, I can move much faster with HTML/CSS than I can with native, and I'm well experienced at both (yes, that includes every tech I mentioned). With native, I either have to compromise on the vision, or I have to spend a long time fighting the system to make it happen (...and even then)
well, then you are really bad at native and should not be comparing those technologies despite your claims otherwise (which make little sense).
> really bad at native
Yikes. I spent 15 years developing native on both mobile and desktop. If you think that native has the same design flexibility as HTML/CSS, you're objectively wrong.
By design, each operating system limits you to its particular design language, and the styling of components is hidden behind the API, making forward-compatible customisation impossible. There's no escaping that. And if you acknowledge that fact, you can't then claim native has the same design flexibility as HTML/CSS. If you don't acknowledge that fact, you're unhinged from reality.
There's pros and cons to the two approaches, of course. But that's not what's being debated here.
The real disconnect is that the user doesn't really care all that much. It's mostly the designers who care. And Qt, for example, but also WPF, let you style components to the point of unrecognizable and unusable results. So if everyone will need to make do with 8 GB for the foreseeable future, designers might just be told "No," which admittedly will be a big shock to some of them. Or maybe someone finally figures out how to do HTML+CSS in a couple of megabytes.
You mean the point is to dump it all on the end user's machine, hogging its resources.
It's bad enough having to run one bloated browser; now we have to run multiples?
This is not the right path.
As the kids say: skill issue!
The point is you can be lazy and write the app in HTML and JS. Then you don't need to write C, even though C syntax is similar to JS syntax, and most GUI apps won't require advanced C features if the GUI framework is generous enough.
Now that everyone who can't be bothered vibe-codes, and Electron apps are the over-evangelized norm, people will probably not even worry about writing JS, and Electron will be here to stay. The only way out is to evangelize something else.
It's like how half the websites have giant in-your-face cookie banners and half have minimalist banners. The experience will still suck for the end user because the dev doesn't care, and neither do the business leaders.
Syntax ain't the problem. The semantics of C and JS could not be more different.
But the point isn't that they're more different than alike. The point is that learning C is not really that hard; it's just that corporations don't want you building apps with a stack they don't control.
If a JS dev really wanted to, it wouldn't be a huge uphill climb to code a C app, because the syntax and concepts are similar enough.
Honestly C and JavaScript could hardly be more different, as languages.
About the only thing they share is curly braces.
Yeah JS is closer to lisp/scheme than C (I say this as someone who writes JS, Clojure and the occasional C).
What "advanced features" are there to speak of in C? What does the syntax of C being similar to JS matter?
This comment makes no sense.
Well, there's the whole C89 vs. C99 thing. I'll let you figure the rest out, since it's apparently a puzzle from your perspective.
You do need a couple framebuffers, but for the most part yeah...
Who cares about 300 MB? Where is that going to move the needle for you? And if the alternative is a memory-unsafe language, then 300 MB is a price more than worth paying. Likewise if the alternative is the app never getting started, or being single-platform-only, because the available build systems suck too bad.
There ought to be a short one-liner that anyone can run to get easily installable "binaries" for their PyQt app on all major platforms. But there isn't; you have to dig up some blog post with 3 config files and a 10-argument incantation and follow it (and every blog post has a different one), when you just wanted to spend 10 minutes writing some code to solve your problem (which is how every good program gets started). So we're stuck with Electron.
> And if the alternative is a memory-unsafe language
and if not?
> and if not?
If the alternative is memory-safe and easy to build, then maybe people will switch. But until it is it's irresponsible to even try to get them to do so.
Until? Just take what's out there - it's so easy to improve on Electron
Like what? Where else (that's a name brand platform and not, like, some obscure blog post's cobbled-together thing) can I start a project, push one button, and get binaries for all major platforms? Until you solve that people will keep using Electron.
There are quite a few options. Many of them look dated, though. I think that's the USP of Electron.
There's a world of difference between using a memory safe language and shipping a web browser with your app. I'm pretty sure Avalonia, JavaFX, and Wails would all be much leaner than electron.
The people who hate Electron hate JavaFX just as much if not more, and I'm not sure it would even use less memory. And while the build experience isn't awful, it's still a significant amount of work to package up in "executable" form especially for a platform different from what you're building on, or was until a couple of years ago. And I'm pretty sure Avalonia is even worse.
The demand is being driven by inference though. I really don't think there will be much motivation.
The large models are incredibly inefficient. We'll be squeezing them down for generations.
Right, that's where the major push is right now. Not with shrinking down some code libraries.
oh that would be a dream
Using a lot less RAM often implies using more CPU, so even with inflated RAM prices, it's not a good tradeoff (at least not in general).
In practice, you generally see the opposite. The "CPU" is in fact limited by memory throughput. (The exception is intense number crunching or similar compute-heavy code, where thermal and power limits come into play. But much of that code can be shifted to the GPU.)
RAM throughput and RAM footprint are only weakly related. The throughput is governed by the cache locality of access patterns. A program with a 50MB footprint could put more pressure on the RAM bus than one with a 5GB footprint.
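A toy C sketch of that distinction (illustrative sizes, not a real benchmark): both functions below touch exactly the same 1 MiB buffer, so their footprint is identical, but the strided walk lands on a cold cache line on almost every access, so it will typically put far more pressure on the memory bus per useful byte (exact behavior depends on the CPU's caches and prefetcher).

```c
#include <stddef.h>
#include <stdint.h>

/* Same footprint, very different bus pressure: both functions read
   every byte of the same buffer exactly once and return the same sum. */

#define N (1u << 20)    /* 1 MiB of data in both cases */
#define STRIDE 4096u    /* jump a page at a time */

static uint8_t g_buf[N];

void fill_buf(void) {
    for (size_t i = 0; i < N; i++)
        g_buf[i] = (uint8_t)i;
}

uint64_t sum_sequential(const uint8_t *buf) {
    uint64_t sum = 0;
    /* cache-friendly: each fetched cache line yields 64 useful bytes */
    for (size_t i = 0; i < N; i++)
        sum += buf[i];
    return sum;
}

uint64_t sum_strided(const uint8_t *buf) {
    uint64_t sum = 0;
    /* visits the same bytes, but in page-sized hops: nearly every
       access misses the cache and costs a full line fetch */
    for (size_t start = 0; start < STRIDE; start++)
        for (size_t i = start; i < N; i += STRIDE)
            sum += buf[i];
    return sum;
}
```

A program looping `sum_strided` over 50 MB can saturate the memory bus while one streaming sequentially through 5 GB barely stresses it, which is the sense in which footprint and throughput are only weakly related.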
You're absolutely right; I don't really disagree with anything you're saying there. That's why I said "generally" and "in practice".
My point is that reducing your RAM consumption is not the best approach to reducing your RAM throughput. It could be effective in some specific situations, but I would definitely not say that those situations are more common than the other ones.
I don't understand how this connects to your original claim, which was about trading ram usage for CPU cycles. Could you elaborate?
From what I understand, increasing cache locality is orthogonal to how much RAM an app is using. It just lets the CPU get cache hits more often, so it only relates to throughput.
That might technically offload work to the CPU, but that's work the CPU is actually good at. We want to offload that.
In the case of Electron apps, they use a lot of RAM and that's not to spare the CPU
> increasing cache locality is orthogonal to how much RAM an app is using. It just lets the CPU get cache hits more often, so it only relates to throughput.
Cache misses mean CPU stalls, which mean wasted CPU (i.e. the CPU accomplishes less than it could have in some amount of time).
> In the case of Electron apps, they use a lot of RAM and that's not to spare the CPU
The question isn't why apps use a lot of RAM, but what the effects of reducing it are. Reducing memory consumption by a little can be cheap, but if you want to reduce it by a lot, development and maintenance costs rise and/or CPU costs rise, and both are more expensive than RAM, even at inflated prices.
To get a sense for why you use more CPU when you reduce RAM consumption by a lot: using much less RAM while still working over the same data means you're reusing the same memory more frequently (recomputing, compressing, or evicting and reloading), and that takes computational work.
But I agree that on consumer devices you tend to see software that uses a significant portion of RAM and a tiny portion of CPU and that's not a good balance, just as the opposite isn't. The reason is that CPU and RAM are related, and your machine is "spent" when one of them runs out. If a program consumes a lot of CPU, few other programs can run on the machine no matter how much free RAM it has, and if a program consumes a lot of RAM, few other programs can run no matter how much free CPU you have. So programs need to aim for some reasonable balance of the RAM and CPU they're using. Some are inefficient by using too little RAM (compared to the CPU they're using), and some are inefficient by using too little CPU (compared to the RAM they're using).
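A toy C sketch of that trade (illustrative, not anyone's real code): the lean version recomputes its answer from scratch on every call, while the cached version spends a table's worth of RAM up front so each query is a single load.

```c
#include <stdint.h>

/* Toy sketch of the RAM/CPU coupling: trade a lookup table (RAM)
   against recomputation (CPU). Triangular numbers stand in for any
   repeatedly needed derived value. */

#define TABLE_SIZE 64   /* cached version only supports n < TABLE_SIZE */

/* Lean: recompute 0 + 1 + ... + n on every call (more CPU, ~no RAM). */
uint64_t triangular_recompute(unsigned n) {
    uint64_t sum = 0;
    for (unsigned i = 1; i <= n; i++)
        sum += i;
    return sum;
}

/* Fat: fill a table once, then answer every query with one load
   (more RAM, ~no CPU per query). */
static uint64_t table[TABLE_SIZE];
static int table_ready = 0;

uint64_t triangular_cached(unsigned n) {
    if (!table_ready) {
        uint64_t sum = 0;
        for (unsigned i = 0; i < TABLE_SIZE; i++) {
            sum += i;          /* table[i] = 0 + 1 + ... + i */
            table[i] = sum;
        }
        table_ready = 1;
    }
    return table[n];
}
```

Scaled up (think gigabyte-sized caches instead of a 64-entry table), this is exactly the "use more RAM to spend less CPU" balance being discussed.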
Only if the software is optimised for either in the first place.
There's a ton of software out there where optimisation of both memory and CPU has been pushed aside, because development hours are more costly than a bit of extra resource usage.
The tradeoff has almost exclusively been development time vs. resource efficiency. Very few devs are graced with enough time to optimize something to the point of dealing with the theoretical tradeoff balances of near-optimal implementations.
That's fine, but I was responding to a comment that said that RAM prices would put pressure to optimise footprint. Optimising footprint could often lead to wasting more CPU, even if your starting point was optimising for neither.
My response was that I disagree with the conclusion that "pressure to optimize RAM implies trading for another hardware resource" is the primary thing that will give; I'm not changing the premise.
Pressure to optimize more often just means setting aside time to bring the program nearer to its algorithmic bounds, rather than shipping whatever was quickest to implement. Given the same amount of time, replacing bloated abstractions with something more lightweight usually nets more memory gains overall than tuning something heavy to use less RAM at the expense of more CPU.
You're thinking an algorithmic tradeoff, but this is an abstraction tradeoff.
Some of the algorithms are built deep into the runtime. E.g. languages that rely on malloc/free allocators (which require maintaining free lists) are making a pretty significant tradeoff of wasting CPU to save on RAM, as opposed to languages using moving collectors.
Free lists aren't expensive for most usage patterns. For cases where they are we've got stuff like arena allocators. Meanwhile GC is hardly cheap.
Of course memory safety has a quality all its own.
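For the curious, here's a minimal arena ("bump") allocator sketch in C. The names (`Arena`, `arena_alloc`, etc.) are illustrative, not any particular library's API. Allocation is a pointer increment, and everything is released at once with a reset; the RAM cost is that nothing is reclaimed in between.

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal arena/bump allocator: trades RAM (no per-object reclamation)
   for CPU (no free lists to maintain or walk). */

typedef struct {
    uint8_t *base;   /* start of the backing region */
    size_t   cap;    /* total bytes available */
    size_t   used;   /* bump offset: bytes handed out so far */
} Arena;

void arena_init(Arena *a, uint8_t *backing, size_t cap) {
    a->base = backing;
    a->cap  = cap;
    a->used = 0;
}

void *arena_alloc(Arena *a, size_t n) {
    /* round the request up to 8 bytes so returned pointers stay aligned */
    size_t need = (n + 7) & ~(size_t)7;
    if (a->used + need > a->cap)
        return NULL;             /* out of arena space */
    void *p = a->base + a->used;
    a->used += need;             /* allocation is just this increment */
    return p;
}

void arena_reset(Arena *a) {
    a->used = 0;                 /* "free" everything in O(1) */
}
```

Note that a moving collector's fast path allocates essentially the same way (bump a pointer in a nursery), which is why its per-allocation cost can be so low; the collector then pays for compaction later instead of maintaining free lists up front.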
> Free lists aren't expensive for most usage patterns.
Whatever little CPU they waste is often worth more than the RAM they save.
> For cases where they are we've got stuff like arena allocators.
... that work by using more RAM to save on CPU.
GC burns far more CPU cycles. Meanwhile I'm not sure where you got this idea about the value of CPU cycles relative to RAM. Most tasks stall on IO. Those that don't typically stall on either memory bandwidth or latency. Meanwhile CPU bound tasks typically don't perform allocations and if forced avoid the heap like the plague.
> GC burns far more CPU cycles
Far less for moving collectors. That's why they're used: to reduce the overhead of malloc/free based memory management. The whole point of moving collectors is that they can make the CPU cost of memory management arbitrarily low, even lower than stack allocation. In practice it's more complicated, but the principle stands.
The reason some programs "avoid the heap like the plague" is because their memory management is CPU-inefficient (as in the case of malloc/free allocators).
> Meanwhile I'm not sure where you got this idea about the value of CPU cycles relative to RAM
There is a fundamental relationship between CPU and RAM. As we learn in basic complexity theory, the power of what can be computed depends on how much memory an algorithm can use. On the flip side, using memory and managing memory requires CPU.
To get the most basic intuition, let's look at an extreme example. Consider a machine with 1 GB of free RAM and two programs that compute the same thing and consume 100% CPU for their duration. One uses 80MB of RAM and runs for 100s; the other uses 800MB of RAM and runs for 99s (perhaps thanks to a moving collector). Which is more efficient? It may seem that we need to compare the value of 1% CPU reduction vs a 10x increase in RAM consumption, but that's not necessary. The second program is more efficient. Why? Because when a program consumes 100% of the CPU, no other program can make use of any RAM, and so both programs effectively capture all 1GB, only the second program captures it for one second less.
This scales even to cases when the CPU consumption is less than 100% CPU, as the important thing to realise is that the two resources are coupled. The thing that needs to be optimised isn't CPU and RAM separately, but the RAM/CPU ratio. A program can be less efficient by using too little RAM if using more RAM can reduce its CPU consumption to get the right ratio (e.g. by using a moving collector) and vice versa.
Moving collectors as generally used are a huge waste of memory throughput, and this shows up consistently in the performance measurements. Moving data is very expensive! The whole point of ownership tracking in programming languages is so that large chunks of "owned" data can just stay put until freed, and only the owning handle (which is tiny) needs to move around. Most GC programming languages do a terrible job of supporting that pattern.
Hopefully not implying you need a GC for memory safety...
Yeah, there's always Fil-C (Rust isn't memory safe in practice).
Or just using less electron and writing less shit code.