A year ago, my co-founder launched Desktop Docs here on HN. It's a Mac app we built with Electron that uses CLIP embeddings to search photos and videos locally with natural language. We got positive feedback from HN and our first paying customers, but the app was almost 1GB and clunky to use.

TL;DR: rebuilding in Rust was the right move.

So we rewrote the app with Rust and Tauri and here are the results:

- App size is 83% smaller: 1GB → 172MB
- DMG installer is 70% smaller: 232MB → 69.5MB
- Indexing files is faster: a 38-minute video now indexes in ~3 minutes instead of 10-14 minutes
- Overall more stability (the old app used to randomly crash)

The original version worked, but it didn't perform well when you tried indexing thousands of images or large videos. We lost a lot of time struggling to optimize Electron’s main-renderer process communication and ended up with a complex worker system to process large batches of media files.

For months we wrestled with indecision about continuing to optimize the Electron app vs. starting a full rebuild in Swift or Rust. The main thing holding us back was that we hadn’t coded in Swift in almost 10 years and we didn’t know Rust very well.

What finally broke us was when users complained the app crashed their video calls while just running in the background. I guess that’s what happens when you ship an app with Chromium that takes up 200MB before any application code.

Today the app still uses CLIP for embeddings and Redis for vector storage and search, except Rust now handles the image and video processing pipeline and all the file I/O to let users browse their entire machine, not just indexed files.
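The core search idea is to embed the query and every indexed file into the same vector space, then rank by cosine similarity (the real app does this with CLIP embeddings and Redis vector search; the tiny 3-dimensional vectors below are made up purely for illustration):

```javascript
// Toy sketch of embedding-based semantic search: rank stored items by
// cosine similarity to a query embedding. Real CLIP vectors have 512+
// dimensions and come from the model; these 3-d vectors are invented.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function search(query, index) {
  return index
    .map(({ id, vec }) => ({ id, score: cosine(query, vec) }))
    .sort((x, y) => y.score - x.score);
}

const index = [
  { id: "beach.jpg",  vec: [0.9, 0.1, 0.0] },
  { id: "office.png", vec: [0.1, 0.8, 0.2] },
];
const results = search([1, 0, 0], index); // pretend query embedding for "beach"
```

A vector store like Redis does the same ranking, just with an index structure (e.g. HNSW) so it doesn't have to scan every vector.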

For the UI, we decided to rebuild it from scratch instead of porting over the old UI. This turned out well because it resulted in a cleaner, simpler UI after living with the complexity of the old version.

The trickiest part of the migration was learning Rust. LLMs definitely help, but the Rust/Tauri community just isn’t as mature compared to Electron. Bundling Redis into the app was a permissioning nightmare, but I think our solution with Rust handles this better than what we had with Electron.

All in, the rebuild took about two months and still needs some more work to be at total parity with its Electron version, but the core functionality of indexing and searching files is way more performant than before and that made it worth the time. Sometimes you gotta throw away working code to build the right thing.

AMA about Rust/Tauri migration, Redis bundling nightmares, how CLIP embeddings work for local semantic search, or why Electron isn't always the answer.

I recently went the other way (started a project in Tauri, moved to Electron) because of frustration with rendering differences between the web views employed on different platforms. Have you run into any cross platform UI bugs since you switched?

It looks like your UI needs are pretty simple while computation is complex so the extra QA tradeoff would still be worth it for you. I'm just wondering if my experience was unusual or if rendering differences are as common as they felt to me.

Also, did you go Tauri 2.0 or 1.0? 2.0 released its first stable release while I was mid-stream on v1, and migration was a nightmare/documentation was woefully inadequate. Did they get the docs sorted out?

We are using system webviews for https://kreya.app (not Tauri, but a custom implementation) and the platform differences are seldom a problem...

Polyfills fix most of the things and we are running automated end to end test on Linux, which catches most of the issues.
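A polyfill in this sense just means feature-detecting and patching only when the webview lacks something, so up-to-date engines keep the native implementation. A minimal sketch (`Array.prototype.at` is picked here as an illustrative example of a method older WebKit builds shipped without, not necessarily one Kreya patches):

```javascript
// Patch the method only if the engine is missing it; native engines
// are left untouched.
if (!Array.prototype.at) {
  Array.prototype.at = function (n) {
    n = Math.trunc(n) || 0;
    if (n < 0) n += this.length;
    return n >= 0 && n < this.length ? this[n] : undefined;
  };
}

const last = [1, 2, 3].at(-1); // 3, whether native or polyfilled
```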

IMO the most difficult thing is figuring out how far behind users are with their webview version, mostly on Linux and macOS. Windows has done things right with their WebView2 implementation.

On the contrary, it is a big issue if you have a complex web app like we do. It was a PITA to deal with user bugs in a specific macOS version with an 8-year-out-of-date webview.

And the performance of WebKitGTK is horrible on Linux.

That’s a big issue if you can’t set a minimum required version for some reason, same for web apps in general. I rarely find many problems with platform behavior, but that’s probably because we just reject out-of-date browsers.

That's one of the main selling points of Electron: it ships a single browser instead of using the system webview.

The main drawback, of course, is that it ships a browser with every app.

Yeah, for a small tertiary app, Electron is a huge performance burden to put on your users. They might just decide it’s not worth it and not use your app.

On the contrary if it's a large app that the user spends lots of time in, then the performance overhead might well be worth it for the user.

Imagine in the first case that it requires a base load of 10 units of energy to run and gives 2 units of output, while in the second it still costs 10 units of base load energy, but now it gives 100 units of output. The base load becomes relatively irrelevant.
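A quick back-of-the-envelope version of that arithmetic, using the same made-up units:

```javascript
// Fixed base load vs. useful output: the overhead's share of the total
// cost shrinks as the app delivers more value.
const baseLoad = 10;
const shareSmall = baseLoad / (baseLoad + 2);   // small utility: overhead dominates
const shareLarge = baseLoad / (baseLoad + 100); // big daily-driver app: overhead is noise
```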

Yeah, I was wondering how you’re dealing with the inconsistency of the different webviews. Are you using jQuery? Or Datastar? Or your own custom-made polyfill, depending on your user base?

TBH, a lightweight polyfill for most system webviews would be a refreshing change from all the SPA frameworks out there.

Wait - there's system webviews? On Mac, Windows, and Linux?

Edit: It looks like Tauri uses the following platform webview features.

https://github.com/tauri-apps/wry?tab=readme-ov-file#platfor...

This looks fantastic! Any deets on the stack?

The UI inside the webview is written in Angular, everything else in C#.

More on the stack and our initial issues can be read here: https://kreya.app/blog/how-we-built-kreya/#cross-platform-gu... (from 2021)

We actually haven't rolled out cross platform support yet with the Tauri version, so we will see how that goes. Our UI needs are simple, luckily. What kind of rendering differences were you seeing with Tauri? Was there one platform that worked the best/worst for your app? We'd love to support Windows next.

With the Electron version of the app, we had issues running our bundled binaries on Intel Macs. That caused us so many headaches that for the Tauri rebuild we decided to focus on one platform first (Apple Silicon Macs) before supporting other platforms.

We went with Tauri 1.4 and no issues so far. Will have to check out the docs for 2.0 migration and see what that looks like.

I worked with an open-source project that uses Tauri 1.x and their migration has been blocked with issues for months. It was a nightmare for the span I was involved in, and it looks like it hasn't moved forward since I stopped.

In particular, rendering and crashing issues specific to Linux have been blockers, but Tauri 1.x also has other rendering issues on Linux that 2.0 fixed. There's little to no guidance on what's causing the stability and new rendering problems or how to fix them.

The app I worked on was a launcher that installed and managed content for an app, and the launcher invoked the app through command-line flags. Those flags arbitrarily fail to be passed in Tauri 1.x but work as expected in Tauri 2.x, but nobody we asked about it knows why.

Also, the multi-repo nature of the Tauri project, and the changes to this structure between 1.x and 2.0, make migration hard.

Tauri 2.0 migration can potentially give you some more performance benefits, because they've greatly enhanced the JS-Rust bridge especially when you're moving lots of data.

We picked egui instead of Tauri because we had more Rust skills than web skills. Other than rasterized text, which I can’t find any customers who actually care about, are there good reasons to go with Tauri? It seems widely used, but also widely complained about.

I think the main good reason to go with Tauri is getting access to the full JS ecosystem, especially if you already have experience with it. If you don’t, there's not really any reason to complicate your life, I think.

My app needs WYSIWYG editors, and JS is full of them.

Who is 'we'? :)

Would you have any webpage or product info, possibly with screenshots?

We're building an in-house DCC with egui so I'm curious.

No, our customers use it internally. Not even sure if they will end up deploying it as a website or a Windows application. Which is an advantage of egui, I suppose. I just picked it because I don't know any HTML or JavaScript and it does all that stuff for you, so it is just a Rust app you can browse to. My team didn't know much more JavaScript than I do and they seemed to like it fine, but not as well as something declarative, which it is not.

Nothing flat out broke, but I am developing and dogfooding on a Mac, so I get visual QA for free on that platform. When I tested my app on a Windows machine, I noticed several UI regressions that looked/felt really janky. I didn't get as far as testing a Linux build, but I assume I'd find more issues there.

I can't remember why I wanted to migrate to 2.0 now, but there was a nice-to-have that I couldn't do in 1.4. I ended up abandoning the 2.0 migration after a slew of cryptic errors, took a step back, and decided I'd be better off using Electron for my project. My app is at heart a rich UI text editor and none of the computation is that expensive. With all the value add coming from the interface, optimizing for consistency there feels right.

Interesting that you ran into UI regressions on Windows. I'm looking forward to getting to the point where we can test on a Windows machine and see what changes... hopefully it's smooth.

With Electron's UI powered by the same browser across platforms, you end up with a much more consistent experience. Makes sense to optimize for that.

> I recently went the other way (started a project in Tauri, moved to Electron) because of frustration with rendering differences between the web views employed on different platforms.

This is our #1 frustration with Tauri. The OS-provided system webviews are not stable, repeatable, consistent platforms to build upon.

Tauri decided that a key selling point of their platform was that Tauri builds won't bundle a browser runtime with your application. Instead, you wind up with whatever your operating system's browser runtime is. Each OS gets a different runtime.

Sounds nice on paper, but that has turned into a massive headache for us.

Safari and Edge have super finicky non-standard behavior, and it sucks. Different browser features break frequently. You're already operating in such a weird way between the tight system sandboxing and CORS behaviors (different between each browser), the subtle differences are death by a thousand cuts. And it never seems to stop stacking up. These aren't small CSS padding issues, but rather full-blown application behavior breakages. Even "caniuse.com" is wrong about the compatibility matrix with built-in web views.

To be fair, we're using advanced browser features. Animation, 2D contexts, trying to do things like pointer lock. But these are all examples of things that are extremely different between each web view.

All of this has doubled (quadrupled - with dev and prod builds behaving so differently! - but that's another story) the amount of physical testing we have to do. It takes so much time to manually test and ship. When we were building for the web, this wasn't an issue even if people used different browsers. The webviews have incredibly different behavior than web browsers.

Their rationale for using OS-provided system webviews instead of a bundled runtime baked into the installer at build time is that it would save space. But in reality all it has done is create developer frustration. And waste so much freaking time. It's the single biggest time sink we have to deal with right now.

We were sold on Tauri because of Rust, but the system browser runtime is just such a bad decision. A self-imposed shotgun wound to the chest.

The Tauri folks have heard these complaints, and unfortunately their approach to solving it is to put Servo support on the roadmap. That's 1000% not the right fix. Servo is not even a production-ready platform. We just want Chrome.

Please just let us bundle a modern chrome with our apps. It's not saving anyone any headache with smaller programs and installer sizes. Games are already huge and people tolerate them. Lots of software is large. It's accepted, it's okay, it's normal. We have a lot of space, but we don't have a lot of time. That's the real trade off.

I want to use Rust. I want to use Chrome.

I hope the Tauri devs are reading this. It's not just from me. This is the general community consensus.

Built-in webviews are not the selling point for Tauri. Rust is.

Why even bother with a browser runtime? It sounds like you have intricate UI needs, so you need a proper UI library. Several options exist.

There are some intangible matters of practicality. The team is more familiar with React.

I tried to use Bevy (since we also use 3D) and that wasn't ready for prime time.

I thought about Iced and Imgui and several other Rust frameworks, but given our experience with Bevy we shied away from it.

We figured we'd be able to move faster and rely on a lot of existing tooling. That's been true for the most part.

> I want to use Rust. I want to use Chrome.

So use Electron and FFI, it's not that hard

> Please just let us bundle a modern chrome with our apps.

please, no.

I wish software companies had to pay for the hardware they require from their users; then we would have devs using Rust instead of JS and optimizing with ASM just to save fractions of a cent per instance. And we wouldn't see companies like MS kill well-designed, performant native apps for an Electron app.


Haven't touched Tauri because of the cross platform issues. The major appeal with Electron to me is the exact control over the browser. I'm curious about Rust integration though. I'm guessing they're doing something that provides better DX over something like https://github.com/napi-rs/napi-rs?

> Haven't touched Tauri because of the cross platform issues.

You were wise. That's the biggest issue plaguing the project right now.

> curious about Rust integration though

Tauri is written in 100% native Rust, so you write Rust for the entire application backend. It's like a framework. You write eventing and handlers and whatever other logic you want in Rust and cross-talk to your JavaScript/TypeScript frontend.
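The shape of that cross-talk can be modeled like this (a toy plain-JS registry, not the real Tauri API: in actual Tauri the handlers are Rust functions annotated with `#[tauri::command]` and the frontend calls them via `invoke()` from the `@tauri-apps/api` package):

```javascript
// Toy model of the Tauri command pattern: the frontend invokes a named
// command with a JSON-ish payload, and a registered backend handler
// produces the result. In real Tauri the handlers live in Rust.
const handlers = {
  // In a real app this handler would be Rust doing file I/O or video work.
  index_file: ({ path }) => ({ path, status: "indexed" }),
};

function invoke(command, payload) {
  const handler = handlers[command];
  if (!handler) throw new Error(`unknown command: ${command}`);
  return handler(payload);
}

const result = invoke("index_file", { path: "/videos/demo.mp4" });
```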

It feels great working in Rust, but the webviews kill it. They're inferior browsers and super unlike one another.

If Tauri swapped OS webviews for Chromium, they'd have a proper Electron competitor on their hands.

Sounds easier/more reasonable the other way around. Aren't there already specific libs/bridges for Rust/Electron/Node for performance-heavy computations?

Don’t try to speak for me please. I’m perfectly happy with my app looking mildly different and not weighing 1GB. I’m inclined to believe most people using Tauri do not have your issue.

I don’t quite understand why you have that issue in the first place. The fact they use the system webview is front, left and center on their website. It’s like you decided to use a fork because of the decorations on the back, and now complain that it’s pointy and the developers should just make it a spoon instead.

Because Tauri people advertise it as a complete solution to everything Electron does. But then they act surprised when people have platform difference issues that they don't have with Electron. Tauri is being actively misrepresented and people have good reason to feel duped.

> I don’t quite understand why you have that issue in the first place.

My read on it is that they didn’t understand the implications of using system webviews.

And possibly they expected Tauri would insulate them from cross-system differences without a lot of exploration.

This. We wouldn't have signed up for this if we'd have known.

Tauri needs a big fat warning label.

Is it hard to ship a Rust library with Electron? At least then it's not all C++

Just recently Tauri announced:

> This year we've got a lot of exciting innovations in store for you, like CEF and SERVO based webviews...

From their discord.

Looking forward to either being stable. I like the idea of Tauri, but I need it to work well on Linux too.

Dealing with the rendering differences isn't any more difficult with Tauri than it is when making a normal web app, is it?

Electron bundles Chromium into the app, so you only have to handle that single rendering engine. Tauri uses whatever the default webview is on the system the app is installed on.

I know. I'm saying that Tauri doesn't make things more difficult than any normal web app development, it's not like making web apps which work across browsers is a new and scary thing

It's not a new or scary thing, but it's way more expensive (in time or money) than relying on "it looks good on my machine, so it looks good everywhere."

I've worked on large consumer-facing web-apps where we had a dedicated QA team (and/or contracting firm) that runs visual regression testing on multiple platforms and browser versions. As a solo developer, I have no interest in being that team for my hobby project. So the tradeoff with Tauri for me was "accept that I will ship obvious UI bugs" vs "accept that I will ship a bloated binary."

Reading anecdata on forums, it seems like the only people who get up in arms over an extra 200MB are HN readers, and my app isn't really targeted at them.

> "it looks good on my machine, so it looks good everywhere."

It has always been a fallacy though, as with CSS the end result can depend on the DPI scaling and the size of the display (unless you make sure it doesn't, but then you need to test different setups to be sure).

Well, they're right to point out that it removes a lot of inconsistency though. For example, today I was working on an app which used some Unicode characters as icons in buttons, and they look perfectly centered in the button in Firefox but weirdly off-center in Chrome.

I never managed to find a set of CSS properties which made it look good in Chrome tho. And if it was a more serious project I'd probably have used SVGs instead of Unicode characters.

Sure, the closer your config is to the actual user config, the fewer issues you'll get, but I wanted to highlight that even with Electron you ought to test on different configurations.

Oh, yeah, certainly. Even when I've made dinky little Electron apps I've had a Windows machine ready (I otherwise use Linux and macOS) just to make the Windows binary and do some rudimentary acceptance tests. You can't get away from testing on all your officially supported configurations.

It’s an unpopular take I’m sure, but I really have to question shipping binaries for platforms that I can’t or won’t personally test. Between platform intricacies I don’t understand, unaddressed papercuts, and limited ability to debug issues, the end result is very likely to be underwhelming and/or frustrating for users of that platform, and as such if the app is paid those users are disproportionately unlikely to convert and more likely to churn. The only real benefit I see is an extra platform icon lined up on the marketing page.

There's also the simple fact that shipping an identical UI on multiple operating systems is obviously wrong for at least some of those operating systems (and maybe all of them). You don't have to worry about "platform intricacies" because a web-based cross-platform UI toolkit is going to get really basic UI conventions wrong anyway.

It’s not only UI design/conventions being covered, but also things like quirks with the system’s audio, compositor, window manager, etc depending on what your app does and which technologies it uses. If the dev doesn’t understand these things about a platform, they’re walking into a minefield by supporting it.

At this point people are more accustomed to Chrome’s conventions than to native UI conventions in most cases.

But there aren't really many "Chrome conventions" to speak of. Every web app (and Electron app) necessarily has to reinvent a lot of wheels that desktop apps get for free from the underlying OS. And sure, there are component libraries for that... way too many libraries, each of them doing everything slightly differently from others.

> Reading anecdata on forums, it seems like the only people who get up in arms over an extra 200MB are HN readers, and my app isn't really targeted at them

I think it's good to be wary of overly sensitive advice about things regular users don't care about. But would a regular user realize they have 10 Electron apps running and their RAM is maxed out all the time?

The argument against Electron isn't just a single bloated binary, but everyone shipping an app that uses way more RAM than necessary.

That's a fair point. My app (and possibly the OP's) is built assuming it is the user's current focus and priority, and that you'll quit it once you're done with the task. I don't feel bad about asking for a chunk of RAM for a focal app like this, but I would feel differently if I was shipping a background tool or small utility or something.

Especially macOS is built on the assumption that you rarely "quit" apps. When you close all of an app's windows, it remains running. You actually have to hit cmd+q or right click and hit "Quit" in the Dock to really quit an app.

In other circumstances too, though, it's not great UX to demand your users quit your app once they're done with it because it eats too many resources just idling in the background. It's an issue I have with both Electron, where idle apps waste tonnes of RAM, and with many Rust UI frameworks, where an immediate-mode architecture means they'll be consuming CPU in the background.

Electron doesn't seem to support it (and even if it did, I suspect most electron developers wouldn't pay it any mind) but...

NSApplicationDelegate's -(BOOL)applicationShouldTerminateAfterLastWindowClosed:(NSApplication *)sender; method exists so an application can, uhh, automatically terminate after the last window closes.

https://developer.apple.com/documentation/appkit/nsapplicati...

It's not 100% consistent but if you look at Apple's applications based on single window (calculator, system preferences, etc), closing the window quits the app.

Electron doesn't automatically exit when the last window is closed, and the Electron getting started guide (https://www.electronjs.org/docs/latest/tutorial/tutorial-fir...) recommends this code:

    app.on('window-all-closed', () => {
        if (process.platform !== 'darwin') app.quit()
    })
Most Electron apps probably do exactly that. It would feel extremely out of place for a macOS app to quit just because you closed a window.

The function has been available for a while, but auto-closing system apps is a relatively recent change.

I can’t find an obvious reference for when Apple started changing this, but it seems related to the background app killing that is done now as well. I’m still not sure how I feel about it, but historically that wasn’t common for Mac apps.

The ability for an app to keep running without a window was a godsend back in the days when we were all running on hard drives and large apps had splash screens to amuse you during their multi-second launch process. If you were done with a particular document but not done with the application as a whole, you could simply close the window and next time you needed to open a document in that app you wouldn't have to sit through the splash screen again. Unless it was an app with a cross-platform GUI that didn't support this model.

Nowadays, apps closing themselves or being closed by the OS automatically is reasonable in a lot of cases, but Electron apps tend to hit the cases where it still is valuable to operate with the classic NeXT/OS X document-based app paradigm.

Part of it is respecting your customers regardless of whether they are aware of your crappy electron app consuming all their RAM and draining their battery or not. If I went to a mechanic (car analogy incoming) that repaired pressurized lines with duct tape instead of hose clamps because I wouldn’t care as long as the car worked, I would be upset, regardless of whether or not it got the job done.

Take pride in your work and respect the people using it.

At one of my jobs we actually measured those differences and even 10s of MB in app size increase moved user download, use and update metrics (negatively).

I think the person you're responding to didn't actually measure if their claim is true.

> it seems like the only people who get up in arms over an extra 200MB are HN readers, and my app isn't really targeted at them

Tauri however, is.

Some of these apps then download huge models these days and the total space savings kinda vanishes.

Absolutely. HN is unnecessarily full of "electron always bad" mindset. It makes sense for so many use cases.

This is true in a "technically correct" way, but not in a meaningful way.

It's like developing sophisticated websites for IE6-era web, with ActiveX and Java applets and this new "ajax" thing on the horizon that sure sounds nice but it'll be a decade before you can actually use it for most of your users.

The very core basics are essentially the same because yea - it's just a web browser. An <h1> will be bigger than a <p>. But they are regularly multiple years out of date, have WILDLY different security and native-access models, version-specific bugs, initialization and threading requirements, performance tradeoffs, and styling quirks, and you might have hundreds or thousands of versions which you cannot reasonably test against, because they are frequently tied to specific operating system versions that you can no longer download and install, or require hardware you do not have.

So yea. IE6-era stuff. Not an exaggeration at all.

For simple stuff they work just fine, performance is generally more than good enough, and they start up faster, use fewer resources, and lead to a much smaller install. They're entirely reasonable choices. But once you push the edges of the envelope they're an absolute nightmare, and that is the entire reason Electron exists. That is what caused Electron to become the giga-powerhouse that it is now. It solved that problem, at relatively high cost, but it is incredibly obviously worth it to anyone who has dealt with native webviews in complicated ways.

Modern browsers are a completely different game, in comparison - far more consistent and up to date in aggregate. They're utterly incomparable.

Problem is that now we have a whole generation with a "works best in Chrome" attitude.

What a waste of time it was fighting for Web freedom.

Perhaps users have different standards for a desktop app they installed.. but it's just a possibility, I do not know.

Tauri does not do this in all cases, as WebkitGTK on Linux has performance issues and is often the ugly duckling of everything.

I also feel like I will have to, yet again, trot out the comment from a Slack dev that explains why they moved _from_ per-platform webviews to Chromium. This isn't new ground being charted, plenty of companies and teams have been down this path and Electron exists for a reason.

(I am not saying Electron is _good_, I am saying that Tauri isn't the holy grail people make it out to be)

According to https://slack.engineering/building-hybrid-applications-with-..., Slack never used multiple different per-platform webviews. The earliest version of their desktop app was Mac-only and used the OS-native WebView API, but they switched to Electron at the same time they started work on making the app cross-platform. At the time, not only did Tauri not exist, but neither did the WebView2 API that it uses under the hood on Windows; they would have had to use the WebBrowser ActiveX control, which uses Internet Explorer's Trident engine and was already deprecated. So it's not so much that they rejected per-platform webviews, as that per-platform webviews were not yet really available as an option on desktop.

Isn't Spotify using CEF (Chromium Embedded Framework) rather than Electron?

Regardless, your point stands: it's a bundled Chromium on all platforms

https://news.ycombinator.com/item?id=18763449

It took me 2 seconds to find in Google, and you're splitting hairs if you think it being macOS-only was the point of my comment. Their second bullet point is just as true today as it was back then.

Slack in web works fine in several platform browsers still though.

That's not what the comment was even remotely disputing.

No, WebkitGTK is notoriously an issue.

It's awful. I'm cautiously optimistic (hopeful? naive?) that progress on servo will make it unnecessary!

The Linux port of Orion (Kagi’s power-user-oriented web browser) is built with WebKitGTK, so hopefully the Orion team will be patching up some of the more glaring issues. At minimum the port will put it in front of pairs of eyeballs that hadn’t been looking at it before, which should help bring attention to bugs.

Is WebKitGTK as used by Tauri worse than WebKitGTK used by a web app user's web browser?

No. But also, nobody (to a first approximation) uses GNOME Web. I would wager most JavaScript web apps don't work in that browser anyway due to WebKitGTK. The most popular GNOME distros all come with Firefox installed, so even the "just use the default" folks won't be using GNOME Web.

WebKitGTK benefits from sharing a codebase with WebKit for iOS, which web developers do care about supporting. There can still be bugs in the Linux-specific platform integration code, but that's not most of the codebase, so any given app is less likely to run across a bug in it. It's not as reliable as more mainstream browser-OS combinations, but saying that most apps don't work is an exaggeration.

Last I checked, WebkitGTK does not have parity with WebKit on iOS, and plenty of devs do test on that platform anyway - so I'm not sure what you're talking about? You're right to correct someone saying _most_ apps don't work, but it's also not cool to just sweep the WebkitGTK issue under the rug and pretend it's not an issue at all. It's bitten plenty of people who build on Tauri.

Additionally, the issues people find with WebKitGTK/Tauri aren't always web-related, but usually more Linux-related (weird blank screens, issues with rendering certain stacked items, etc).

GNOME Web sucks; it has shitty defaults and horrid performance. But luakit/vimb can be really fast. Luakit even ran on a netbook. Single-page bound, OK, but not bad for an N270 with 1GB of RAM.

> Luakit even ran on a netbook. Single-page bound, OK, but not bad for an N270 with 1GB of RAM.

I have one such device and Firefox and Chrome also run on it. They're slow but still usable.

But the thing is, if you're using Tauri, you control the web app. So you need to test it on WebkitGTK, I guess that's the extra burden?

Have you actually tried deploying to webkitgtk ever?

Good luck "testing" your video conferencing app on webkitgtk - it doesn't support webrtc! It is still useful to test your error page I suppose.

Note that this is one example among many of missing features, bugs and/or horrible performance.

Here's a preview: no notifications, no :has, no TLA.

(Not blaming the epiphany devs for the situation here to be clear)

The thing is, Firefox/Gecko isn't embeddable (probably one of the worst tech blunders ever). I wonder if Tauri could wrap Blink instead? Then your app could just ask for Chrome to be installed.

The modern Qt web view component, QtWebEngine, is actually Chromium-based. So you could imagine a Tauri which uses Qt instead of GTK and uses QtWebEngine as its renderer instead of WebKitGTK.

Ahem https://github.com/tauri-apps/tauri/discussions/8426

That's a significant amount of workaround for something that "just works" elsewhere (and if I'm reading correctly doesn't work under Wayland).

It proves what everyone knows: that there's no reason WebRTC can't work in Tauri/Linux environments.

It also proves the point here: there are legitimate issues with the system-provided webview approach that are not always apparent.


Chrome has by far the most consistent cross-platform rendering.

[dead]

[deleted]

I am a web developer and haven't used Tauri or Electron much yet.

I am wondering why rendering differences between different platforms are such an issue? When building web apps, you face the same challenges, so I would assume it wouldn't be much different.

The promise of Electron is the version of chrome you develop on is the version that ships with your app. If it looks right on your machine, it looks right on whoever is running it. This is much nicer than when doing web development and deciding which browsers/browser versions to test and support.

Tauri does not bundle chrome with your app. This makes the bundle size much smaller. But the tradeoff is you end up rendering in whatever the default web view browser is. On Mac this will be some version of Safari (depending on MacOS version), and on Windows it will be some recent-ish Edge thing. Tauri actually has a nice page breaking this down: https://v2.tauri.app/reference/webview-versions/

This also means that a new OS release can change how your app is rendering, so a user can conceivably have a UI bug appear without updating your app.

Does anyone actually choose to ship an Electron app instead of a web app in order to benefit from UI consistency? Most Electron apps I've seen either share a codebase with a web app that's also made available (so the codebase still has to be tested cross-browser), or else can't be web apps because they need full filesystem privileges or otherwise don't work with the browser's security model.

Yep, we use Electron specifically because it gives us a locked-down version of Chromium with a consistent WebGPU implementation. Without that, we're stuck dealing with whatever browser version the user happens to have, and that completely wrecks the stability of our GPU pipeline.

Okay, fair, none of the apps I had in mind make much use of WebGPU and it stands to reason that browser diversity might be a bigger problem with a very complicated and relatively less mature API like that one.

Yes, it is the biggest selling point. You almost never need to debug user/platform specific UI bugs -- if the app starts, it will render the same. There might be some GPU issues, but that's not due to the framework itself.

Electron apps often fully share codebase with the Web apps, so on the app backend you implement native functionality and communicate with your app via IPC.

I would expect most trouble to come from complicated features like the audio stack or canvas, as well as system integration, not aesthetics.

That does not answer the question.

The question is not if Electron feels better for developers because it renders consistently.

The question is if that matters. Is it a big issue? Does any user actually care?

I care, for sure. Electron apps have been better than tauri apps so far, from what I’ve used.

Web developers building web apps for web browsers typically do not test cross browser compatibility.

They build in Chrome and test with Chrome, and then the rest of the week they whine about Firefox and Safari.

Some of us still do; apparently the newer generation no longer does, putting to waste all our effort for Web freedom. It's ChromeOS now, Google succeeding where Microsoft failed.

If true then Chrome is truly the new IE because that's exactly what they used to do with IE.

Web apps pretty much by definition don't do the kinds of things that desktop apps want to do. In the rare cases where they are actually on par feature-wise, it's just as much headache to support all the browsers in use, it's just that the functionality bar is so much lower on average.

I had the exact same experience and switched from Tauri 2.0 to Electron after a month.

Not only were there UI inconsistencies, but Safari lags behind Chrome on things like the Popover API, and the build/codesign/CD ecosystem for Tauri is incredibly scattered.

When I was using it, IAPs were still not really an option for Tauri or at least I could not find any docs or resources about it.

Same here. We went with Electron mainly for consistency and stability. The larger bundle size wasn’t an issue for our particular project, so the decision was pretty straightforward.

> Have you run into any cross platform UI bugs

Of course not, it's only for Mac. If they were to support Windows and Linux, they probably would not have published this post.

Cross-platform UI is hard, even harder if you want to keep almost the exact same UI, same feature set across platforms, and potentially an online version. People moved from native applications to Qt to web stack for a reason.

Saying this as someone who works at a company that develops cross-platform desktop application that has millions of users. I can't imagine what my job would be like if we were using any other solution.

Not sure what you're saying; Tauri uses the native web view, it still ends up rendering a website. The differences in UI toolkits don't matter.

Just read the other replies: if you use the web platform in any capacity, you end up with tons of hard-to-reproduce issues, and it seems performance on Linux is bad no matter what.

Chromium is superior to the native web view unless you have the latest version of Windows or macOS.

I don't even need to refute the point myself -- plenty of comments under the parent thread have already pointed out problems with Tauri.

Do you mean the actual rendering of html/css when you say rendering differences? Or are you referring more to differences in JS support?

Pure html/css rendering. JS support could be addressed trivially in the build system by transpiling anything unsupported by the oldest likely target.

Not all JS APIs are fully polyfillable, especially ones that are part of the Web platform rather than ECMAScript. I dunno if it's a bigger problem overall than HTML/CSS, probably not, but it's not consistently trivial.

I'm curious how Tauri causes different views on different platforms, because Tauri frontends are websites, and if websites function the same way across platforms, so should the apps. If websites use something to hide browser differences, so should the app developer on Tauri.

[dead]

You're post-honeymoon.

“We moved from X to Y and were so in love.” posts are often postcards from the honeymoon.

We love it here. We're staying together forever

The documentation for Tauri V2 was straight up trash when I last used it about 6 months ago. The project is very promising, but it was a huge pain getting some things to work with nothing to reference.

[deleted]
[deleted]
[deleted]

I guess the Web would be so much better if the ChromeOS platform was all that remained; who needs standards and multiple vendors.

It's much more convenient for developers for there to be a dominant open-source browser engine. Open source reduces the need for these standards and multiple vendors. See what happened with how Linux largely replaced the slew of UNIXes. The ability for everyone to contribute to a single project, paired with the ability to customize it where needed to suit the product you are building, has shown to be a winning model.

Chromium may be open-source, even free software, but only in letter, not in spirit. Google has total control over it and will force through any changes it likes. The developer community can only abide.

>but only in letter, not in spirit. Google has total control over it

This applies to every open source project. The owners control what will be merged upstream and the direction the project will go in.

Ask BSD guys and girls how they feel about that.

Monocultures are great, as long they are the one we bet on.

Well due to it being open source their niche operating systems were able to run Linux software via a compatibility layer and Linux drivers making them better.

Microsoft was right all along after all, what a waste of money in lawsuits, monoculture for the win.

Likewise I guess there is no problem that game developers mainly care about Windows, Proton is open source, so no big deal, why bother.

>monoculture for the win

The browser is the actual product. An open source browser engine lowers the barrier of entry of creating new browsers.

>Likewise I guess there is no problem that game developers mainly care about Windows, Proton is open source, so no big deal

Which is why Valve recommends game developers target Windows and use Proton for compatibility. Having one platform to target simplifies developers' lives. Before, developers were making bad Linux ports because they did not have the resources to properly support other tech stacks. The value of developers being able to target a single platform cannot be overstated.

Though this is fundamentally a different situation as the leading implementation is closed source and is more capable.

There will be no browsers left, likewise the Year of Desktop Linux will never come, cursed forever to emulate/translate other platforms, ChromeOS and Windows, so that it can have any kind of applications, pretending to be "native".

What do you mean? Chrome, Edge, Opera, and Brave are all different browsers that share the same browser engine. The year of the Linux Desktop didn't come, but the year of the Linux phone did come with Android becoming the most popular operating system surpassing desktop operating systems.

>pretending to be "native".

The code is native. Just because something uses a library to call platform code doesn't mean it isn't native. By that logic, programs that use Qt are not native because they use a cross-platform API.

Effectively killing Web freedom; it's all about the ChromeOS computing platform.

The Linux kernel is an implementation detail on Android, there is nothing about Linux exposed as official userspace API.

Any use of Linuxisms on Android apps is done at user's own peril and possible kick out of PlayStore.

I don't see how it would kill web freedom. If anything the reduction of duplicate work means that the web can evolve faster to better compete against its competitors to stay relevant and attractive for developers to target and support.

>The Linux kernel is an implementation detail on Android

The kernel is such an important part of an operating system, you can't really ignore it as a developer even if technically it may be an implementation detail.

>Any use of Linuxisms on Android apps is done at user's own peril and possible kick out of PlayStore.

Sure, but Linux's ABI is stable, and even in a world where things move to Zircon, starnix was made to support that same ABI.

I did the same thing with one of my projects. I built a simple webcam viewer optimized for USB microscopes, as I couldn't find anything out there for this purpose. Basically all of the functionality was implemented in the renderer. As I was planning for App Store submission, I realized that a 500MB webcam viewer might not be the best thing. I decided to port it to Tauri V2 and got it down to about 15MB.

What is the difference between Tauri and Electron? From what I understand both use a browser for rendering, except Electron ships the whole browser while Tauri uses the browser already on the system.

That's part of it, but also Tauri uses Rust on the backend while Electron uses Node. Electron is way more mature with a larger developer community, but Tauri keeps gaining momentum. If memory safety, bundle size, and performance are important to you, Tauri is a nice choice. Electron is not bad but there's a reason there are so many new players.

The main significant difference: Electron bundles its own version of Chrome, which means you have very few cross-platform issues when shipping Mac/Windows/Linux. This trades off a few hundred megabytes for consistency in rendering.

Tauri uses the OS engine, which means Windows presumably uses Edge (WebView2) and Mac uses Safari's WebKit, so you're going to have rendering differences and feature differences there.

> If memory safety

But Tauri is just a wrapper around WebKit, which is written mostly in C++.

Yes, but it is far more tested, fuzzed, studied and battle hardened than your app code will ever be. So in the grand scheme of things it isn't a high risk for stability or security.

Yes, it would be nice if the full stack were memory safe, but that isn't a good reason not to write your own code in a memory-safe language.

It has very easy bindings for your own Rust code, so anything you actually care about (your own code) will be memory safe.

Tauri uses native WebView (can be very outdated in old OS versions) and compiles to native machine code. Electron bundles full Chromium (rendering engine for HTML/CSS+V8 for JS) AND Node.js for your app code.

Honestly, if there were an Electron without Node.js that used literally any compiled language (although Rust is probably too low-level), it would've been more tolerable.

Exactly. Electron ships with a copy of chromium in every app, while Tauri uses the native WebView of the operating system.

Electron is also way more mature, but Tauri is improving.

Out of interest, how did you stream the video data to the frontend?

I used MediaStream, which is part of the standard Web API

https://developer.mozilla.org/en-US/docs/Web/API/MediaStream

That's pretty sick. Nice work. What's it called? We're working on the app store submission this week.

I ended up calling it Microscopic View. I made a webpage for it and everything.

https://microscopic-view.jgarrettcorbin.com

I got tied up with other projects while I was trying to navigate the submission process (it was my first time), so it's not up yet, but I'd be happy to send you a build if you want to check it out.

I’d love to see something like this optimized for low end Android - old tablets are almost free and otherwise useless even for web browsing.

Also, what is your recommendation for finding a cheap usable microscope? My brief forays to aliexpress have just resulted in frauds and trash.

https://plugable.com/products/usb2-micro-250x/

This is the one I use. It is surprisingly good for the price. It just behaves as a regular webcam.

I primarily use it for micro soldering, so your mileage may vary, but it is very good for the price. I got it on Amazon where I believe they have an official store.

I'm neither a Mac user nor is our team exploring a Rust rewrite ... however, I appreciate this post. This is what I hope for from "Show HN": a just-long-enough summary of technical tradeoffs required to solve a real-world problem for a relatable small business (admittedly that part is an assumption). Thank you for sharing your experience.

Happy to share the experience. It was something we debated for a long time. Rebuilding something that is already kind of working is daunting, but in this case we are happy with the results.

It would be great to see an up-to-date benchmark comparing modern cross-platform frameworks like Tauri, Flutter, Electron, React Native, and others.

Key metrics could include:

- Target bundle size

- Memory usage (RAM)

- Startup time

- CPU consumption under load

- Disk usage

- etc.

Additionally, for frameworks like Tauri, it would be useful to include a WebView compatibility matrix, since the rendering behavior and performance can vary significantly depending on the WebView version used on each platform (e.g., macOS WKWebView vs. Windows WebView2, or Linux GTK WebKit). This divergence can affect both UI fidelity and performance, so capturing those differences in a visual format or table could help developers make more informed choices.

Here's a great comparison, updated two weeks ago. https://github.com/Elanis/web-to-desktop-framework-compariso...

Electron comes out looking competitive at runtime! IMO people over-fixate on disk space instead of runtime memory usage.

Memory usage with a single window open (release builds):

Windows (x64):
1. Electron: ≈93MB
2. NodeGui: ≈116MB
3. NW.JS: ≈131MB
4. Tauri: ≈154MB
5. Wails: ≈163MB
6. Neutralino: ≈282MB

macOS (arm64):
1. NodeGui: ≈84MB
2. Wails: ≈85MB
3. Tauri: ≈86MB
4. Neutralino: ≈109MB
5. Electron: ≈121MB
6. NW.JS: ≈189MB

Linux (x64):
1. Tauri: ≈16MB
2. Electron: ≈70MB
3. Wails: ≈86MB
4. NodeGui: ≈109MB
5. NW.JS: ≈166MB
6. Neutralino: ≈402MB

The benchmark also says Tauri takes 25s to launch on Linux and build of empty app takes over 4 minutes on Windows. Not sure if those numbers are really correct.

A few months ago, I experimented with Wails and Tauri on Windows. The builds did indeed take unreasonably long with the Rust option and were way faster with Go, no idea why but I ditched Tauri because of that since Wails did more or less the same thing.

Did you manage to publish/ship your Wails app? What was your biggest hurdle with it?

It was an internal app, a GUI to configure a CLI tool in a user friendly manner. For that use case, I essentially built a local SPA with Vue that can also call some endpoints on server side software that we also host. There, the rendering differences between the web views didn't really matter but the small distribution size was a major boon, plus being able to interface with Go code was really pleasant (as is that whole toolchain). No complaints so far, then again, not a use case where polish would matter that much.

I'd say that the biggest hurdle for that sort of thing is just the documentation or examples of how to do things online - because Electron is the one everyone seems to use and has the most collective knowledge out there.

That seems like a perfect use case for a Wails app, nicely done!

This. Reminds me of Casey Muratori warning people to not trust benchmarks made by random people on the internet.

There’s absolutely no way Tauri apps take 25s to launch. Source: I’ve played with Tauri on Linux. This is an order of magnitude off.

I wonder if the reason Tauri does great on Linux while Electron is at its worst there comes down to optimization, or the lack thereof, respectively.

I've compared block editors (Notion - Electron, Appflowy - Flutter , etc) to my Qt C++ & QML block editor I've built in my blog post[1] using similar parameters. You might find it a good read.

[1] https://rubymamistvalove.com/block-editor

I'd love to see a comparison like this.

At a previous company we maintained a desktop electron app for Windows and macOS. It was super bloated though and updates with Squirrel were a pain.

We kept the GUI as a web SPA (using Inferno) and wrote two small native apps in C# and Swift that would load a webview and handle other duties. App download size and memory consumption were reduced by like 90%. We also moved distribution and updates to the app stores of each platform.

It was a great decision.

This is the first time I've seen someone praising distribution and updates through native app stores. The time it takes to get updates to users and the uncertainty of getting through the approval process are just some of the reasons. I know nothing about Squirrel, but what were the things that improved by moving to native?

I agree about dealing with Apple but having updates completely solved for us was a huge deal. We just didn't have the resources to solve this properly.

This was an app offered for free to some customers of the company. If the app had been the main commercial product we would have obviously opted for a better solution than distributing through stores or using Squirrel.

Back in 2018 we needed a server[1] that would notify Squirrel of updates. Squirrel worked OK on macOS but was particularly bad on Windows. I don't remember the details... IIRC Squirrel installed the actual executable in some weird folder, and users would never be able to find the app if they deleted the link from the desktop.

[1] https://github.com/ArekSredzki/electron-release-server

Nice. How long did the migration take?

It was trivial to create the apps with a web view. Then we implemented new features integrated with the OS that we didn't have before. This was back in 2018 so my memory is a bit fuzzy but I'd say in total 2-3 weeks of work. We definitely lost more time with Squirrel and Electron than that...

Honest curiosity — why did you choose a service like Redis over a more straightforward embedded solution like SQLite? In my head Redis seems better suited to distributed solutions but I've never actually built a desktop application so I'm probably speaking from ignorance.

My bet is that he didn't know sqlite exists.

- "Try" on the main LP call to action implies there is an available trial but just leads to a buy page. You really should have a trial, even something like just 1 week.

- Fan of the perpetual fallback licensing. $99 is a high barrier, though; I'm guessing you are targeting creators/studios vs. a more general consumer audience (I'd think more like $20-25 for general consumers).

- You mention performance in this post but not at all on the landing page. The 38-minute video indexing in about 3 minutes would be very important to know for many potential customers. I'd want benchmarks on various machines and info like parallel task processing, the impact of and requirements around VRAM, etc. I would want insight into what processing hundreds to thousands of hours of video is going to look like.

- I am curious how (and shocked that) Electron itself was somehow responsible for a processing bottleneck, going from 10-14 minutes to 3 minutes. Wasn't Electron just responsible for orchestrating work to CLIP and likely ffmpeg? How was so much overhead added?

- I built (but never released) a similar type media search tool but based on transcriptions in Electron and didn't run into many performance issues

- Usually a lot of the motivation for building in Electron in the first place (or Tauri) would be cross platform, why mac only (especially for something like bulk media processing where nvidia can shine)

- I too recently went down the path of not having coded in Swift in 10 years and also investigated Tauri. I opted for Swift (for a new app, not a rewrite of something) and it has been mostly a pleasure so far, and a dramatic improvement compared to my last Swift app from around 2014 or so.

- If the LP section about active users is true it sounds like you've already found some modest success (congrats). Did you already have relationships/audience around the studio/creator space? Would be interested to hear about the marketing

Appreciate the feedback! We haven't set up the infrastructure for a trial, but maybe in the future.

That's cool you built a similar tool - what kept you from releasing it?

Plan is to ship a Windows and Linux version in the next few months if there's enough demand.

We've gotten our users through various launches on HN and reddit with some minimal linkedin promotion. It's been mostly word of mouth, which has been very promising to see.

Re: the Electron and video processing performance - there's a lot to dive into. I don't claim to be an Electron expert, so maybe it was our misuse of workers that created a bottleneck. As part of the migration to Rust we also implemented scene detection to help reduce the number of frames we indexed, and this cut processing loads a lot. We also added some GPU acceleration flags on ffmpeg that gave us meaningful gains. Batching image embedding generation was also a good improvement, up to a point, before it started crashing our model instance.
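Not the actual Desktop Docs code, but the batching idea above is easy to sketch in plain Rust: feed frames to the model in fixed-size chunks so one oversized batch can't exhaust memory or crash the model instance. `embed_batch` here is a hypothetical stand-in for a real CLIP inference call.

```rust
/// Stand-in for a real CLIP inference call; an actual implementation
/// would run the model (e.g. via onnxruntime) on the whole batch.
fn embed_batch(frames: &[Vec<f32>]) -> Vec<Vec<f32>> {
    frames.to_vec()
}

/// Generate embeddings in fixed-size batches so memory use stays
/// bounded regardless of how many frames a video produces.
fn embed_all(frames: &[Vec<f32>], batch_size: usize) -> Vec<Vec<f32>> {
    let mut embeddings = Vec::with_capacity(frames.len());
    for chunk in frames.chunks(batch_size) {
        embeddings.extend(embed_batch(chunk));
    }
    embeddings
}
```

Picking `batch_size` is the tuning knob the comment alludes to: larger batches amortize per-call model overhead, right up until they exhaust memory.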

Given your destination was the Mac app, why not Swift/SwiftUI rather than Rust/Tauri? Just curious is all.

Thanks for checking it out. The goal is for Desktop Docs to be cross-platform. We've had a lot of requests for Windows support, so we chose Rust to set us up for an upcoming Windows version.

I know it's probably still not ready for prime time, but I believe the Arc browser team was building a Windows runtime for Swift because they prefer to use Swift everywhere.

I checked it out a while back. It still requires you to write two different UIs in two different frameworks: SwiftUI or AppKit on Mac, and WinUI on Windows. It's just that now you can write WinUI code in Swift instead of C#.

I mean, I guess that makes sense, as it'd be a pretty big project to port AppKit to Windows APIs, but that's not really a great benefit in terms of cross-platform development. I guess if you're building something like a browser, you're going so low-level anyway that most of those cross-platform bells and whistles don't provide much benefit.

Have you started your windows version testing? Any issues you've seen in the differences between browsers tauri would use on the different OSs?

We haven't started testing for windows yet. Are you on Windows? Happy to let you know when we're releasing that version.

App looks great, I'm on Windows so I can't wait to see it!

Thanks! If you're interested in that version, drop us a note: hello [at] desktopdocs dot com. We'll shoot you an update when it's ready!

I wanted to ask the same question. Swift is a fairly nice language and seems to offer many of the benefits of Rust. As another commenter asked, I am also interested in the details of integrating CLIP.

I like the narrative, BTW, on why you needed to port your app.

Thanks! Mentioned it in the other comment, but we're using the ort crate in Rust and bundling onnxruntime with the app. We definitely considered Swift, and I know it's gotten a lot better since I last used it, but cross-platform support was what got us to use Rust over Swift.

As far as porting over goes, we are much happier maintaining the new version.

Curious as well. I'm planning to build a desktop app; I haven't used Swift for a long time and I'm pretty new to Rust. Tauri looks very promising. I really don't like Electron apps; they're so slow to start even on lightning-fast machines. Thanks for any insights!

After our Electron experience, I wish I had moved out of my comfort zone (JS) sooner. Electron just requires a lot of optimization and you have to be really tight with your imports to avoid loading things you don't immediately need.

The smaller bundle size with Tauri and blazing speed are well worth the effort.

I write a lot of small internal tools to enable my team to work more efficiently. I've traditionally used WinForms, but recently tried using WinUI 3. Disaster. Not even close to ready. After that, I switched to just using React, uploading to an Azure static site, and adding Tauri if someone really wants an executable. I'm finding what you're finding: Tauri gets you most of the way there and comes with a much smaller file size than its competitors. Really nice to be able to ship the same code for the web and desktop without shipping Chrome again in the binary.

You should have gone with WPF.

No one in the Windows developer community takes WinUI seriously, we all know the mess it has been since Project Reunion was announced in 2020.

There is a reason WPF regained its status back in BUILD 2024.

For smaller tools, wouldn't iced-rs be a much better solution? It's much lighter than Tauri, and probably runs anywhere since it's self-contained. Initially the code might seem daunting, and frustrating due to its no-hand-holding nature, but it gets easier.

And last time I worked with it, it seemed much easier than previous versions.

If you have a complex UI, then Tauri is better.

Yes, it's nice that we can support the same functionality we were supporting in our Electron app using a much smaller binary with Tauri.

[dead]

How did you settle on Tauri, as opposed to e.g. egui? Is it because of the experience with Electron?

I'm dragging my feet about porting my Python Qt app to Rust, because I feel that no Rust GUI library is as rich as Qt and I know that I'll get stuck with that at some point.

Having rich UI library was important for us too. We went with Tauri because we wanted access to web UI libraries.

What are your motivations for porting in the first place?

There are two specific places where Python is not performant. I ran some tests in a few languages and Rust and C++ came out on top, by far. I could write Rust components and access them via Python. I could also use C++ and stick with Qt. Or I could take the plunge with Rust. As this is a personal app with no other users, this is a good place to keep sharpening my skills.

Then I’d say just pick the one you are most familiar with

None of the above, not really. That's why I asked OP what got him settled on Tauri.

+1 for using Tauri over Electron. I've been using it for my MCP marketplace and management app[1], and it's been excellent. Having previously used only Electron, I was surprised by how much smaller the binaries are. Performance also feels noticeably faster.

The only challenge was my lack of familiarity with Rust. Even if you're starting off with a "JS-first" app, Tauri often requires dropping into Rust for anything even slightly native, such as file system access (e.g. managing config files for Claude, Witsy, or code editors), handling client lifecycle actions like opening, closing, and restarting, or installing local MCP servers.

1. https://ninja.ai

I recently built an Electron app (http://dyad.sh/) and I looked at other options like Tauri, but Electron has such a mature ecosystem (e.g. https://www.electronforge.io/) that I was able to ship a cross-platform app in a couple weeks (Mac+Windows) and then adding Linux support was pretty trivial.

The only downside from my point of view is the large installer size for Electron apps, but it hasn't been a big issue for our users (because they will need to download quite a bit of other stuff like npm packages to actually build apps with dyad)

This was my experience as well. We also looked at Tauri but we ultimately decided on Electron due to its prevalence and us not wanting to fight the tide.

I have built something similar but only for photos - https://viroop.com. It is free for now. You have to run this command on the extracted app for MacOS to not show you the dreaded "App is damaged" dialog - `xattr -d com.apple.quarantine Viroop.app`.

My app is built with Tauri too. It supports all kinds of images:

- JPEG
- PNG
- TIFF
- WEBP
- BMP
- ICO
- GIF
- AVIF
- HEIC/HEIF

plus RAW images from various camera manufacturers.

The image reading and processing (for exporting images) is all done on the Rust side. These are the crates I use:

- image
- libheif-rs -> to read HEIF/HEIC images
- rawler -> to read JPEGs embedded inside RAW images
- libraw -> to convert RAW images to JPEGs and PNGs
- rexiv2 -> to read image EXIF data

I use the candle crate to download the CLIP model and generate index pairs for images. I store the faiss indexes in a file on the file system.

I've been using the app personally for about a month and it feels amazing to use something you have built yourself.

I hope to add an image editor to the app in the future, so that I have my own image management and editing software that's enough for my amateur needs.

Any kind of feedback would be most welcome.

I love the idea of what this app is doing, and so many of the options in this space are laggy and painful, so I'm very interested in this rewrite. That said, as a photographer a lot of my media is stored as RAWs, and it's not clear whether that's supported (or perhaps its exclusion from the list of "all major image and video formats" suggests that it's not)?

Does your product have docs/a support forum/other place these kinds of details would be covered?

RAW files aren't supported yet, but we plan on including them. We're also working on a support page, I'll share that here when it's out.

Thank you! Looking forward to the new features, I'll keep an eye out.

Why embed Redis? What feature do you need from Redis so badly that you need to embed it? Maybe you could just use Rust and have something performant without adding complexity?

Their vector search modules are the reason I embedded it. I tried SQLite and wasn't getting good search results with the VSS extension. Maybe I wasn't configuring it correctly. Redis was just better than the alternatives (Qdrant, SQLite, DuckDB with vector search).

Are you using Flat indexes? If so, they should return the same results provided you are using the same distance function. If you aren't using Flat indexes there might be more setup, but I'd recommend just using Flat indexes. They are plenty fast on most systems for searching ~1 million vectors (assuming 1024-dimensional 32-bit float vectors).

If you aren't doing anything crazy you could probably just get away with storing them all in a memory mapped file.
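To make the "Flat index" point concrete, here is a minimal std-only Rust sketch of exact brute-force search: score every stored vector against the query with cosine similarity and take the top k. Any exact index using the same distance function must return this same ranking; ANN indexes trade away that guarantee for speed.

```rust
/// Cosine similarity between two vectors of equal length.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

/// Indices of the top-k vectors most similar to `query` (exact, brute force).
fn top_k(index: &[Vec<f32>], query: &[f32], k: usize) -> Vec<usize> {
    let mut scored: Vec<(usize, f32)> = index
        .iter()
        .enumerate()
        .map(|(i, v)| (i, cosine(v, query)))
        .collect();
    // Sort descending by similarity score.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.into_iter().take(k).map(|(i, _)| i).collect()
}
```

At the scale mentioned above (~1 million vectors), a loop like this is usually fast enough, especially once the dot products are SIMD-accelerated.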

I would love to understand this in more depth. Any chance you could write up what you found?

Or alternatively, why would SQLite be not as performant as Redis in embedded context?

I just wasn't able to get the similarity results to match Redis. Probably my own error, but that's why I opted for Redis instead of SQLite. We'll revisit at some point.

As a C++ developer I shrug at those size numbers, but I'm glad you are happy with Rust.

OP's sort of being misleading when they say they "rewrote it in Rust", IMO. They're still using HTML and JS for their UI, rather than a native solution. The size/perf improvements mostly just come from using the OS' native engine instead of bundling Chromium.

As a Rust developer, I do too! Those are not representative of Rust GUI/3D desktop applications. I've made some pretty complicated stuff with 2D and 3D graphics and lots of functionality; expected size is 5-15MB on Windows, and 10-25MB on Linux. Less if you don't need 3D.

Strip the Linux binaries with strip(1).

What do you think would be more in line with what you've seen in the past? There's definitely room to keep optimizing.

<25MB for the executable. For the use case the web page describes, probably <15MB. It usually comes down to the dependencies used.

Interesting. I'd love to get our app size down further, under 15MB would be great.

Is there a production-ready UI framework in Rust yet? Because with Tauri at least you can use JS, which is fine.

Yeah, Slint, which is most similar to Qt.

see https://www.boringcactus.com/2025/04/13/2025-survey-of-rust-...

lol when you click through to this site from HN it redirects to https://upload.wikimedia.org/wikipedia/commons/d/d4/Human_fa...

If you open it in a new tab, or copy/paste the link into a new tab, it does not.

Funny for sure, but not novel. JWZ also doesn't let his web site be linked from HN. Example link (that only works properly when you copy-paste; opinionated rejection otherwise): https://www.jwz.org/blog/2024/06/mozillas-original-sin/

egui

iced is the best IMHO

Should we do a C++ rebuild?

That has nothing to do with Rust and everything to do with Tauri.

If you use FLTK you can start a binary with no dependencies at 100KB.

You didn't rebuild in Rust. You built a wrapper around a WebView. That WebView still uses the same resources as if you'd used WebKit directly.

Your application is now at the whim of version breaks in the OS browser.

The UI appears simple enough to be implemented in any desktop GUI library. Why did you decide on using web technologies? Is it because of the libraries you are using in the app?

Yeah, familiarity with the UI component libraries was a big driver in using web technologies for it.

Very nice. I've been trying to find an image duplicate detection algorithm/system that suits my use-case for a while. Your app seems promising, however, I'm not willing to pay $99 just to see if it works with my (uniquely challenging) duplicate images.

After realizing there was no demo I was looking for a way to contact you directly with a few sample images, but can't find contact information on the website.

Consider adding a demo and contact info.

Otherwise, the app is looking solid. This seems like a great use of AI.

For free/open source options I would suggest https://dupeguru.voltaicideas.net/ and https://github.com/0x90d/videoduplicatefinder

Thanks for the feedback! Will get a demo video and contact info up shortly.

De-duplicating images is on our roadmap. Shoot me your contact info at hello@desktopdocs.com. Would love to see if we can help.

> demo video

I would want to use a demo version (could be with limited functionality) before paying $99 upfront! Not a demo video...

This whole thread reads like it is really bad idea to use webtech for desktop applications.

I agree if your intention is to only make desktop applications. But if you have in mind a hybrid app or a team with web experience only who are ok with making compromises on performance and/or UI, it's a tradeoff that people are right to make, imo. It's of course not optimal, but that's engineering for you.

[deleted]

Tauri uses webtech.

What part?

Isn't such an app best implemented with some cross-platform framework like Flutter? It has support for all major desktop OSes, and at least the examples run very smoothly.

I evaluated Flutter for my app before deciding to go with Tauri and wrote about it on my blog: https://arboretum.space/blog/why-arboretum-chose-tauri .

The short version is that Flutter's lack of rich text editing solutions at the time made it a non-starter. It's a common problem in the Flutter ecosystem from what I've seen, there's often 0 or only 1 quality package for many "advanced" desktop use cases.

> many "advanced" desktop use cases

I've found that the GUI library I tried (fyne with go) was mobile-first, so some desktop things e.g. file-open dialogs didn't have the functionality I expected (the "dialog" was actually drawn within the same window as the application window). Flutter is mobile first too IIUC.

Outside of Qt, languages like Rust and Go don't have a good, solid desktop GUI development option.

I've heard good things about Flutter! We briefly considered it but just opted for Tauri out of preference.

You should also hear about the dark side of Flutter. It reimplements the whole UI rendering layer and UI components, so you lose all the flexibility, accessibility, etc. of native components. On the other hand, with Tauri or React Native you get browser or native OS components.

I don't like the sound of reimplementing all the UI components...

You don't have to reimplement them yourself, they already come provided with the same components as on the web, like buttons, forms etc (and there are also custom UI frameworks like forui which clones shadcn/UI if you're familiar with that). The point being, Flutter apps can run much smoother precisely because they reimplement everything rather than dealing with the legacies of HTML and CSS. Since your UI seems fairly simple to make in any UI framework, you could take a look at Flutter as well.

I do so with Rust as well, via the flutter_rust_bridge package, which works great. I'm working on a mobile app that also works on web and desktop when I tested it, even all the Rust parts.

> Flutter apps can run much smoother precisely because they reimplement everything rather than dealing with the legacies of HTML and CSS

Maybe in some cases, but I kind of doubt this statement in general. I just tried a Flutter demo from their official site and text selection doesn't even _work_ correctly.

https://flutter.github.io/samples/web/simplistic_editor/

I'll copy-paste a few lines of the example sentence, double click on one of the middle lines to start selecting by word (which it doesn't seem to even do), and then highlighting starts on the top line instead of the line I selected.

In general the flutter apps always feel janky and second-class to the platform they're on, because they never fully implement the exact behavior of each platform they run on.

I didn't see 'highlighting on wrong line', but double-clicking a word not selecting it is annoying (Firefox on Windows 11).

Inspiring! It seems Electron proved its value for the v1 MVP anyway. And then Rust has cemented the already-proven functional value.

I don't know the technical details, but maybe SQLite would be the best next step to slim down?

I noticed the screenshots on the page are displayed cropped on Chrome for Android (yeah, I know).

Congratulations on the rewrite and launch!

1. Did you consider Wails (Go)?

2. Did you consider ColPali?

3. Are you planning to launch on other platforms (Linux/Windows)? If so, how are you planning to handle self-updates and signing the binary to prevent false positives from AV?

Thank you.

Just curious did you guys give any thought to writing the core processing work in a separate Rust solution that your Electron app could call into?

If anyone does decide to pursue this, you can use napi-rs [0] to write Rust modules and call it from JS. Lower overhead than IPC but you will crash your process if there's an issue in your Rust code.

[0] https://napi.rs/

We considered it and decided all the wrangling wasn't worth it. We briefly also thought about adding a Python process to interact with our ML models, but figured it was worth streamlining this all into a single language/framework vs. creating Frankenstein's monster.

How much of it was the UI, and how much was the non-UI code? Could you have achieved many of the same gains by creating a local backend in Rust for things like indexing and leaving the UI as a more lightweight Electron app, while saving a lot more time in a rewrite? How much dev time was the UX rewrite compared to the backend rewrite?

The UX was a small part of the rewrite. Since we started from scratch we had a blank slate and figured we'd go all in on Tauri/Rust/React vs. trying to continue with Electron in the mix.

Why did you bundle Redis and not use one of the many key value libraries available for Rust (or even sqlite)?

For vector search we couldn't get our results to return meaningful entries. My preference would have been to use SQLite. Any others you'd recommend besides SQLite with the VSS extension?

You could split it up in two separate entities. For vector search there's a myriad of good Rust projects. I've personally used:

- https://crates.io/crates/lancedb
- https://crates.io/crates/usearch
- https://crates.io/crates/simsimd

usearch and simsimd are fast and lightweight, but I'd advise using lancedb if you're a bit new to Rust, as the other two are a bit trickier to handle due to the C dependency (e.g. usearch needs Vec::with_capacity and will otherwise crash, etc.).

And then, you take the result of this query and can combine it with a sqlite `in` query.
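A hedged sketch of that two-step pattern (the `files` table and `id`/`path` column names here are made up for illustration): take the ids returned by the vector search and build a parameterized SQLite `IN` query to hydrate the actual rows. Std-only string building; binding the params would be done with whatever SQLite client you use, e.g. rusqlite.

```rust
/// Build a parameterized SQL query fetching the rows whose ids came
/// back from the vector search, e.g. `... WHERE id IN (?, ?, ?)`.
/// Placeholders rather than string interpolation keep the ids safe
/// to bind with any SQLite client.
fn hydrate_query(table: &str, ids: &[i64]) -> (String, Vec<i64>) {
    let placeholders = vec!["?"; ids.len()].join(", ");
    let sql = format!("SELECT id, path FROM {table} WHERE id IN ({placeholders})");
    (sql, ids.to_vec())
}
```

Note the ids come back in similarity order from the vector store, while `IN` returns rows in arbitrary order, so you'd re-sort the hydrated rows by that original id order afterwards.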

Or you use SQLite with a vector search extension: https://crates.io/crates/rig-sqlite

How are you searching with redis?

Aren't vector searches usually just nearest values with some distance calculation? Are they not all implemented the same way?

Came here to post this. So curious about the choice to NOT use sqlite.

Would have loved to use sqlite if I could get good results. Maybe I botched the implementation.

Creator of sqlite-vec here, happy to help debug anything if you do attempt to try out SQLite vector search again.

You mentioned "VSS" in another comment, which if that was my sqlite-vss extension, I more-or-less abandoned that extension for sqlite-vec last year. The VSS extension was more unstable and bloated (based on Faiss), but the new VEC extension is much more lightweight and easy to use (custom C vector functions). The new metadata column support is also pretty nice, and can be used for push-down filtering.

That being said, sqlite-vec still doesn't have ANN yet (but can brute-force ~10k's-100k's of vectors quickly enough), and I've fallen behind on updates and new releases. Hoping to push out a desperately-needed update this week.

link: https://alexgarcia.xyz/sqlite-vec/

And there are a few other SQLite-based vector search extensions/tools out there, so the ecosystem in general might be worth another look.

Hey Alex - thanks for commenting. I'd be interested in giving this all another look. Like we mentioned in the post and comments, we went with Redis because that's what got us the results we wanted. I'm open to revisiting our vector storage if it spares us needing to manage a separate DB process.

Any effort to get rid of the Electron bloat is great.

May I ask if you had tried Neon (ostensibly a Rust-Electron bridge)? I was curious if perhaps Neon had failed you or something..

What an amazing achievement. From the outcome, it sounds like all the hardwork has paid off. Congratulations :)

How have users perceived the new version so far? Is there positive feedback? Any new complaints due to parity issues? Or in general, how is your team measuring the success of the UI? From the post, it sounds like users have a way to provide feedback and your team has a way to engage with them, which is wonderful. So I'm curious to learn.

How do you run CLIP in Rust? I'm still not sure what the best inference stack on Rust is --- I tried onnxruntime bindings in Rust a few years ago and it was rather finicky to work with.

We're using the onnxruntime bindings via the ort crate. Our biggest challenge was just bundling the onnxruntime into the app and making sure everything was signed, etc.

I heard about burn [1] and candle [2] recently and they sounded interesting.

[1] https://crates.io/crates/burn

[2] https://github.com/huggingface/candle

Interestingly, burn supports candle as a backend.

Nice. I'll try to test these out against our current implementation.

Tauri, from my experience, can be extremely frustrating.

Yeah, there are definitely hang-ups along the way. It's a bit of whack-a-mole, but thankfully we've had an easier time developing with Rust than on Electron.

> I'm still not sure what the best inference stack on Rust is

I was just looking into this today!

The options I've found, but yet to evaluate:

- TorchScript + tch = Use `torch.jit.trace` to create a traced model, load with tch/rust-tokenizers

- rust-bert + tch = Seems to provide slightly higher-level usage, also use traced model

- ONNX Runtime - Convert (via transformers.onnx) .pt model to .onnx encoder and decoder, then use onnxruntime+ndarray for inference

- Candle crate - Seems to have the smallest API for basic inference, and AFAIK can load up models saved with model.save() without conversion or other things

These are the different approaches I've found so far, but probably missed a bunch. All of them seem OK, but on different abstraction-levels obviously, so depends on what you want ultimately. If anyone know any other approach, would be more than happy to hear about it!

There's also the burn framework but there are a lot of tradeoffs to consider. It's neat for wgpu targets (including web) but you'll need to implement a lot of stuff.

Candle is a great choice overall (and there are plenty of examples) but performance is slightly worse compared to tch.

Personally, if I can get it done with candle that's what I do. It's also pretty neat for serverless.

If I can't, I check if I can convert it to onnx without extra work (or if there is an onnx available).

As a last resort, I think about shipping torchlib via tch.

Great resources, thanks. I'll look into the other packages and compare against our onnx runtime setup.

Great work, I can imagine how interesting the migration was. Why are you using Redis, if I may ask? Was something like SQLite not enough? What are your biggest challenges with Tauri?

I also work on an Electron app where we do local embeddings, and most of the CPU-intensive work happens in Node.js addons written in Rust using Neon (https://neon-rs.dev, very grateful for this lib). This is a nice balance for us.

Thanks! We went with Redis because we weren't able to tune SQLite with a vector search extension to give us the results we wanted. I'm sure it's possible to use it instead of Redis, but that's an optimization for another day. I'll check out Neon.

I’ve been trying to build a desktop app in Rust+egui but as a Rust newbie (and desktop app newbie in general) I’ve found it really difficult to learn so many concepts at once. My work centers around mechanical engineering analysis tools which require high perf backend and data visualization on the front end. With Tauri did you find it difficult to maintain multiple stacks (rust, js, html, etc)?

> Redis for vector storage ... Redis bundling nightmares

I'm super curious why you picked Redis over something more typical (SQLite springs to mind).

What was the advantage of doing this that made it worth the pain?

I did some prototyping with SQLite + the VSS extension to handle embeddings and wasn't getting the same results as with the Redis search modules. My preference would be to run something that doesn't require a separate process for the DB, but it still needs some tuning. Redis gets the job done, but there are likely simpler solutions we'll discover that could give us comparable performance.

You could have just written the performance-critical part of your Electron app in C++ as a native Node module and achieved the same performance improvements. You actually did exactly that, just using Rust instead of C++.

This seems like the type of thing that LLMs would be great at, since you already have a fully specified application (all requirements and details worked out). Has anyone attempted something like this?

Yes, we rewrote our Java desktop app into Typescript/Electron with the help of LLMs and we had a POC ready in a day, then had feature parity / bugs squished in a week.

OT, but I would love to have a trial to test this before purchasing. Nice product idea though; I have these kinds of issues dealing with my photos and videos.

And how come you don't charge any VAT or GST?

  > Do you offer refunds?
  Once you download Desktop Docs it's yours forever, so it's non-refundable.
Hmm, it's not really common to not offer refunds for software licenses. And you might get chargebacks anyway.

Is that macOS and Apple Silicon only? In this case indeed it's a no-brainer not to use Electron (why even use it in the first place?). If you ever want to support Intel processors, Linux and its many variants and Windows that will be a different story.

I'm curious how you're implementing the image similarity matching. I recently reverse engineered the Apple Neural Hash model and wrote an API to use it in my app for doing image similarity calculations. I found it to be extremely quick compared to some of the other more computationally intensive methods that I was trying to use before.

We're using Redis vector modules for cosine similarity. I'm sure there's more to optimize there. Your project sounds cool. How'd you reverse engineer Apple's model?

> Today the app still uses CLIP for embeddings

Have you investigated multimodal embeddings from other models? CLIP is out of date, to put it mildly.

For sure. We've been prototyping embedding a vision model in Desktop Docs, but it just needs more dev time to be stable. We went with CLIP for parity, but we are looking to upgrade soon. I tried SigLIP and wasn't impressed. Do you know of other open-source image embedding models you'd recommend?

nomic-embed-vision-1.5 (https://huggingface.co/nomic-ai/nomic-embed-vision-v1.5) is alignable with nomic-embed-text-1.5 for multimodal retrieval and implements some more modern LLM improvements, although it doesn't solve some of the problems CLIP has.

Given the importance to your business, it may be worthwhile to fine-tune a modern natively-multimodal model like Gemma 3 to output aligned embeddings, albeit model size is a concern.

I love Gemma. I use it in LM Studio and am working on getting it into Desktop Docs. Thanks for the nomic link. I'll do some testing...

Thank you for sharing such a rare experience. I'm also looking into building a Tauri production app, and I'm curious about how you handled authentication. Are there any helpful documents or resources you could recommend?

Would you please describe your adoption of LLMs in achieving this task in detail? For example, specific tooling at code-completion, web-chat and any other modalities?

Any lessons learned, particularly on leveraging LLMs to complete this transition, could give a boost to people contemplating leaving Electron behind or even starting a new project with Tauri.

Really like your approach! Local-first and clean UI makes a big difference. We're building a social media tool with similar values—more account support and personal profile compatibility. RSS deserves this kind of revival. Keep it up!

Very nice!

What were your most valuable resources when moving from Electron to Tauri?

Are there any guides out there on the equivalencies between the two systems?

Tauri isn't really fully cross-platform in the same way as Electron due to the issues with webkit-gtk, etc. though.

What kind of issues are those? We want to support Windows soon. With Electron we had some cross-platform issues where our bundled binaries wouldn't run reliably on different platforms (even when the binaries were bundled and loaded for those specific platforms).

I have an issue where I have two canvases that are overlapping and only WebKitGTK (not even just WebKit) just randomly stops showing one of the canvases.

Strange. From the comments here, it sounds like there are a lot of issues with Tauri's UI being inconsistent.

Just rendering a grey screen on Nvidia - https://www.reddit.com/r/tauri/comments/16tzsi8/tauri_deskto...

It's probably okay on Windows though as the backend is different, but that's part of the problem.

I see

Looks awesome. I work closely with Multimodal search and have had trouble porting CLIP to ONNX and other formats due to the lack of multi-head attention operators. Are you using Python for the CLIP inference, or did you manage to port it to a format hostable in a Rust or C/C++ inference runtime?

Yes, we were able to port the CLIP model to work with ONNX Runtime for inference

May I ask which version of ORT you are using? Were the outputs identical to PyTorch outputs for the same image?

> The trickiest part of the migration was learning Rust. LLMs definitely help, but the Rust/Tauri community just isn’t as mature compared to Electron.

Can you recommend useful resources?

Why Mac specifically? Context: I do a lot of GUI application work in Rust, and have never had to make a program OS-specific. There are quirks with CUDA and CPU architecture if using SIMD, but they can be feature-gated out or detected at runtime.

Desktop Docs needs a GPU to work well, so we started with Apple Silicon Macs since they are ideal candidates to handle the workload of indexing an entire media library.

> Indexing files is faster: A 38-minute video now indexes in ~3 minutes instead of 10-14 minutes

Can you explain how indexing is done and how Rust helped in this case?

Would you be willing to upload a basic template repo of your architecture? I'm very interested in this, and would love to see how you organized your project.

Another question - how long did it take for you to rewrite your app?

They note in-post that it took ~2 months

Launch post:

Show HN: I made a Mac app to search my images and videos locally with ML

https://news.ycombinator.com/item?id=40371467

May 15, 2024 | 173 comments

That's it! Almost one year anniversary for Desktop Docs!

I would be willing to pay for this if it were available for Linux! Main use case: searching through my years of screenshots

Awesome! Linux version is on our roadmap and that use case is exactly why we started building this too. I can let you know when that version is ready

I wonder why it wasn't written in Swift. I mean, if you want to make an app targeting a specific platform and willing to invest time into a native language, might as well use the platform's official native language.

Apple only sees developers as a revenue stream to squeeze dry. Investing into Apple-only technologies is getting yourself into an abusive relationship. macOS is still a good platform, but staying away from Swift gives you an escape plan.

There's also no point having a native UI on macOS any more. Apple ruined it themselves by switching to flat design, and making their own UIs and apps an uncanny valley between macOS and iPadOS. There's no distinct native look-and-feel on macOS any more.

The people who make Microsoft Teams should learn from this.

Reminds me of eagle.cool, think they will add in AI search soon too

What was the most surprising thing you saw or learned in this rewrite process?

The move from Electron helped us remove "native libraries" in Node. Previously we were using Xenova transformers to instantiate the models. This forced us to use multiple package.jsons and added a lot of cognitive overhead to our development.

The move to rust freed us up to focus more on feature development than configs and setup. It was surprising because I thought learning rust would set us back much longer, but the trade-off was worth it.

I'd be super curious to see if Chroma (written in Rust) could work better here!

> but the app was almost 1GB

I mean, I knew Electron was heavy, but holy cow, that is HEAVY. No wonder that despite CPUs getting faster and RAM being 10x the size it used to be, software keeps feeling slower than ever. So you weren't talking RAM size, you were talking the size of the app itself? 1GB? That used to be the size of AAA games.

Yeah, we were talking about the app itself, it was way too big when we first started.

After this experience, do you think you'll ever build anything in the future with Electron? When do you think Electron is actually the right choice (if at all)?

Now knowing how big Electron is off the bat compared to Tauri, I think I'd have to learn a lot more about how to optimize Electron to build with it again. That said, I think Electron is great for prototyping an idea and quickly getting it out.

Out of the frying pan into the fire.

:just-blaze

Please write more about how you embedded Redis in either Electron or Rust.

this is good work & massive. nicely done

thanks for checking it out.

I'd love to write more about bundling redis binaries into these apps soon. There isn't a lot written about it now (at least that I could find) and it was a lot of trial and error to get it working.

I built an Electron app for managing massive (millions of files) photo and video libraries. It took work and creativity (what technical problems don't?), but it's very performant. Node is actually pretty good at I/O. That is to say, Electron itself wasn't your bottleneck; rewriting in another language is a pretext to write long-form blog posts for SEO and marketing purposes.

sounds like you built a really useful app for working with large media libraries.

In our case, the bottleneck was related to how big the app was to start and how much we could optimize it to index media files for local AI search.

Interesting, do you have any write up on the app? What does your app do?

The more I do with Rust, the more I love this language. It just feels like coming home.

The perfect hackernews headline.

:chefs-kiss

Nice work! downloading now.

There is no trial; the "Try Desktop Docs" button just links you to a Stripe payment page.

Also, it's a bit ambiguous whether it searches documents. All the screenshots are of image search, but the features say you can search inside PDFs and docs, though "All Your Files" says images and videos only.

We aren't offering a trial right now, but to answer about docs: we're testing an update to support searching text and will be releasing it soon. Right now you can search the contents of all your images and videos.

[deleted]

I'm 100% on board for abandoning Electron as often as possible.

should just use flutter from the start

why?

more mature and supported compared to Tauri???

After Electron, Flutter maybe comes second for the multi-platform thing.

Seems like a lot of people in the comments disagree. Good to know it exists.

The same majority of people that recommended Electron in the first place?

Sometimes following the wisdom of the crowd is alright, but that same majority of people deciding to ship a browser engine with their apps because they don't want to learn another technology is astonishing.

What vision/LLM model do you use?

For now we're using CLIP. We've also done testing with SigLIP and Gemma for a full-blown vision model.

Why bundle redis and not another embedded kv store like lmdb?

$155AUD? You serious?

Beware the greenfield effect.

I don’t want to comment on the technology choices specifically here, but in general the whole “we rewrote our app in X and now it’s better” is essentially a fact of life no matter the tech choices, at least for the first big rewrite.

First, you’re going to make better technical choices overall. You know much better where the problems lie.

Second, you’re rarely going to want to port over every bit of technical debt, bugs, or clunky UX decisions (with some exceptions [1]), so those things get fixed out of the gate.

Finally, it’s simply invigorating to start from a (relatively) clean slate, so that added energy is going to feel good and leave you in a sort of freshly-mowed greenfield afterglow. This in turn will motivate you and improve your work.

The greenfield effect happens even on smaller scales, especially when trying out a new language or framework, since you’re usually starting some new project with it.

[1] A good example of the sort of rewrite that _does_ offer something like an apples-to-apples comparison is Microsoft’s rewrite of the TypeScript compiler (and type checker, and LSP implementation) from TypeScript to Go, since they are aiming for 1-to-1 compatibility, including bugs and quirks: https://github.com/microsoft/typescript-go

Counterpoint: if you rewrite a Rust app, ANY Rust app, into a perfectly rewritten Electron app, it will 100 percent still be shittier, bigger, and slower, and eat more RAM and CPU.

Possibly but not guaranteed.

For desktop apps, UI quality and rendering speed are paramount. There's a lot of stuff buried inside Chrome that makes graphics fast: for example, deep integration with every operating system's compositing engine, hardware-accelerated video playback that is integrated with the rendering engine, optimized font rendering... a lot of stuff.

If your Rust UI library is as advanced and well optimized as Blink, then yes, maybe. But that's pretty unlikely given the amount of work that goes into the Chrome graphics stack. You absolutely can beat Chrome in theory, by avoiding the overhead of the sandbox and using hardware features that the web doesn't expose. But if you just implement a regular ordinary UI toolkit with Rust, it's not necessarily going to be faster from the end user's perspective (they rarely care about things like disk space unless they're on a Windows roaming account and Electron installed itself there).

Having worked on a graphical application in Rust (albeit not a complex one): computers today are fast. Latencies top out at 3 ms with CPU-based rendering in an application with just a few rendering optimizations.

The fact that you just draw to the screen instead of doing whatever HTML parsing / DOM / IR work is happening probably explains it. And rendering on the GPU means extra delay from the processing moving from CPU to GPU, plus being a frame behind because of vsync.

Can you clarify this? Seems to me that no matter how you render your UI, it has to go to the GPU framebuffer at some point.

For any non-trivial case where I can enable GPU acceleration for an app, it's been anywhere from equivalent to much more responsive.

What apps have you experienced delays with by enabling GPU acceleration?

Where it really shows up is stuff like power usage and reliably hitting framerates even on slow machines with big hi-res monitors attached.

Point of information: I believe this project uses Tauri, which actually does use web technology and even JavaScript for rendering, it just does it with the native web renderer of the platform so you're not dragging around a superfluous extra copy of Chrome in RAM for each and every individual app:

"Write your frontend in JavaScript, application logic in Rust, and integrate deep into the system with Swift and Kotlin."

"Bring your existing web stack to Tauri or start that new dream project. Tauri supports any frontend framework so you don’t need to change your stack."

"By using the OS’s native web renderer, the size of a Tauri app can be as little as 600KB."

So you write your frontend with familiar web technology and your backend in Rust, although it's all running in one executable.

I am curious if it would be all that much worse if your backend was also JavaScript, let's say in Node.js, but it certainly depends on what that back end is doing.

Comparing Rust with Electron is so weird. One is a language, the other is a lib/framework.

I think it’s clear what stack is typically meant by “an Electron app” and why using a blanket term like this is faster.

However, you could use Rust compiled to WASM in an Electron app, therefore the two aren’t even mutually exclusive.

For the OP's case, it is only a little language-related. It is more of a lib/framework thing.

Tauri is the thing being implicitly compared with electron. Well worth checking out.

It's not, both can be apps.

This is akin to saying if you rewrite it in Assembly it would be better than Rust. True, but what are the tradeoffs? Why doesn't everyone write it in assembly?

It _would_ be bigger and eat more RAM and CPU. But that does not imply "shittier".

There are parameters like dev time, skills available in the market, familiarity, the JS ecosystem etc that sometimes outweigh the disadvantage of being bigger/slower.

You're pointing out the disadvantages in isolation which is not a balanced take.

> This is akin to saying if you rewrite it in Assembly it would be better than Rust

Not really. Nobody is rewriting GUI apps in Assembly, the reasons are obvious.

All those parameters mentioned are exclusively for developers. End users don't care and will get a worse product when you choose Electron instead of doing it properly.

End users care that they get a product at all. Which they won't if it's too costly to make. There is a balance that is appropriate for each project. Or else we should all be writing machine code by hand.

Rust has been shown by Google to not be any less productive than other mainstream languages though.

Link? I’d love to learn more!

[citation_needed]

> All those parameters mentioned are exclusively for developers. End users don't care and will get a worse product when you choose Electron instead of doing it properly.

A sensible take wouldn't pick one or the other as unilaterally better in the abstract. The web as a platform is categorically amazing for building UIs, and if you continued to choose it as the frontend for a much more measurably performant search backend, that could be a fantastic product choice, as long as you do both parts right.

This is something I think a lot of people miss about Rust - outside of slow compile times and personal preference, there is no reason not to choose Rust over JavaScript/TypeScript (unless of course you're working in the browser). It does everything JavaScript can do, but it does it faster and with more stability. At the end of the day, these features pay out huge dividends.

And this isn't Rust zealotry! I think this goes for any memory-safe AoT language that has a good ecosystem (e.g. Go or C#): why use JavaScript when other languages do it better?

Rust's type system gymnastics compared to most languages goes quite a bit beyond preference. I can't see the overlap at all with dynamic scripting languages, two completely different tools for completely different problems.

TS has one of the more gymnastics-heavy type systems, IMO, and I think many if not most JS shops use TS.

TS is gradual though, Rust is all or nothing.

> there is no reason not to choose Rust

Sounds like Rust zealotry to me, followed by a mild attempt to walk it back.

A world of difference between the borrow checker and a garbage collector.

Dev time like catering for the ever-moving Node stack? (Not sure if it applies here because I'm not familiar with Tauri.)

All of that is true, but technological choices aside it doesn’t take away from all of the points made by the parent.

I took that as somewhat the point, and I think it was insightful. Your app will still be worse, but worse as a result of your poor technology choices, not the arguments made here. Put together it may still be a bad move, but you would still get the greenfield effect.

Performance is a feature, I agree. Language choice matters to a degree. Shitty apps can still be written in “fast” languages.

[dead]

Microsoft rewriting typescript tools in Go and getting a 10x speedup? It's wild that they would choose Go for that. And a surprising level of speedup.

https://devblogs.microsoft.com/typescript/typescript-native-...

In my experience, Go is one of the best LLM targets due to simplicity of the language (no complex reasoning in the type system or borrow checker), a high quality, unified, and language-integrated dependency ecosystem[1] for which source is available, and vast training data.

[1]: Specifically, Go community was trained for the longest time not to make backward-incompatible API updates so that helps quite a bit in consistency of dependencies across time.

I have never understood why people want to use LLMs for programming outside of learning. I have written Perl, C, C#, Rust, and Ruby professionally and to this day I feel like they would slow me down.

I have used golang in the past and I was not, and am still not, a fan. But I recently had to break it out for a new project. LLMs actually make golang not a totally miserable experience to write, to the point I’m honestly astonished that people found it pleasant to work with before they were available. There is so much boilerplate and unnecessary toil. And the LLMs thankfully can do most of that work for you, because most of the time you’re hand-crafting artisanal reimplementations of things that would be a single function call in every other language. An LLM can recognize that pattern before you’ve even finished the first line of it.

I’m not sure that speaks well of the language.

> I have never understood why people want to use LLMs for programming outside of learning

"I have never understood why people want to use C for programming outside of learning. I have written PDP-11, Motorola 6800, and 8086 assembly professionally and to this day I feel like they would slow me down. I have used C in the past and I was not, and am still not, a fan. But I recently had to break it out for a new project. Turbo C actually makes C not a totally miserable experience to write, to the point I’m honestly astonished that people found it pleasant to work with before it was available. There is so much boilerplate and unnecessary toil. And Turbo C with a macro library thankfully can do most of that work for you, because most of the time you’re hand-crafting artisanal reimplementations of things that would be a single function call in every other language. A macro can recognize that pattern before you’ve even finished the first line of it. I’m not sure that speaks well of the language."

They are enormously powerful tools. I cannot imagine LLMs not being one of the primary tools in a programmer's toolbox, well... for as long as coding exists.

Right now they are fancy autocompletes. That is enormously useful for a language where 90% of the typing is boilerplate in desperate need of autocompletion.

Most of the “interesting” logic I write is nowhere close to autocompleted successfully and most of it needs to be thrown out. If you’re spending most of your days writing glue that translates one set of JSON documents or HTTP requests into another I’m sure they’re wildly useful.

I don't know which models you are using, but in my experience they have been way more than fancy autocomplete. I have had thousand-line programs written and refined with just a few prompts. On the analysis and code review side, they have been even more impressive, finding issues and potential impacts of changes and describing the intent behind the code. I implore you to revisit good models like Gemini 2.5 Pro. To wit, an actual Linux kernel vulnerability in the SMB protocol stack was discovered with an LLM a few days ago.

Even if we take the narrow use case of boilerplate glue code that transforms data from one place to another, that encompasses almost all programs people write, statistically. There was a running joke at Google "we are just moving protobufs." I would not call this "fancy autocomplete."

It comes back to the nature of the work; I've got a hobby project which is basically an emulator of CP/M, a system from the 70s, and there is a bug in it.

My emulator runs BBC Basic, Zork, Turbo Pascal, etc, etc, but when it is used to run a vintage C compiler from the 80s it gives the wrong results.

Can an LLM help me identify the source of this bug? No. Can I say "fix it"? No. In the past I said "Write a test-case for this CP/M BDOS function, in the same style as the existing tests" and it said "Nope" and hallucinated functions in my codebase which it tried to call.

Basically if I use an LLM as an auto-completer it works slightly better than my Emacs setup already did, but anything more than that, for me, fails and worse still fails in a way that eats my time.

> Can an LLM help me identify the source of this bug? No. Can I say "fix it"? No. In the past I said "Write a test-case for this CP/M BDOS function, in the same style as the existing tests"

These are all things I've done successfully with ChatGPT o1 and o3 in a 7.5kloc Rust codebase.

I find the key is to include all information which may be necessary to solve the problem in the prompt. That simple.

I wrote a summary of my issue on a github comment, and I guess I will try again

https://github.com/skx/cpmulator/issues/234#issuecomment-291...

But I'm not optimistic; all previous attempts at "identify the bug", "fix the bug", "highlight the area where the bug occurs" just turn into timesinks and failures.

It seems like your problem may be related to asking it to analyze the whole emulator _and_ compiler to find the bug. I'd recommend working first to pare the bug down to a minimal test case which triggers the issue - the LLM can help with this task - and then feed the LLM the minimal test case along with the emulator source and a description of the bug and any state you can exfiltrate from the system as it experiences the issue.

Indeed, running a vintage, closed-source binary under an emulator, it's hard to see what it is trying to do, short of decompiling it and understanding it. Then I can use that knowledge to improve the emulation until it successfully runs.

I suggested in my initial comment I'd had essentially zero success in using LLMs for these kind of tasks, and your initial reply was "I've done it, just give all the information in the prompt", and I guess here we are! LLMs clearly work for some people, and some tasks, but for these kind of issues I'd say we're not ready and my attempts just waste my time, and give me a poor impression of the state of the art.

Even "Looking at this project, which areas of the CP/M 2.2 BIOS or BDOS implementations look suspicious?", "Identify bugs in the current codebase", "Improve test coverage to 99% of the BIOS functionality" - prompts like these feel like they should cut the job in half because they don't relate to running specific binaries, yet they also do nothing useful. Asking for test coverage is an exercise in hallucination, and asking for omissions against the well-known CP/M "spec" results in noise. It's all rather disheartening.

> Indeed running a vintage, closed-source, binary under an emulator it's hard to see what it is trying to do, short of decompiling it, and understanding it.

Break it down. Tell the LLM you're having trouble figuring out what the compiler running under the emulator is doing to trigger the issue, what you've done already, and ask for its help using a debugger and other tools to inspect the system. When I did this, o1 taught me some new LLDB tricks I'd never seen before. That helped me track down the cause of a particularly pernicious infinite recursion in the geometry processing code of a CAD kernel.

> Even "Looking at this project which areas of the CP/M 2.2 BIOS or BDOS implementations look suspicious?", "Identify bugs in the current codebase?", "Improve test-coverage to 99% of the BIOS functionality" - prompts like these feel like they should cut the job in half, because they don't relate to running specific binaries also do nothing useful.

These prompts seem very vague. I always include a full copy of the codebase I'm working on in the prompt, along with a full copy of whatever references are needed, and rarely ask it questions as general as "find all the bugs". That is quite open ended and provides little context for it to work with. Asking it to "find all the buffer overflows" will yield better results. As it would with a human. The more specific you can get the better your results will be. It's also a good idea to ask the LLM to help you make better prompts for the LLM.

> Asking for test-coverage is an exercise in hallucination, and asking for omissions against the well-known CP/M "spec" results in noise.

In my experience hallucinations are a symptom of not including the necessary relevant information in the prompt. LLM memories, like human memories, are lossy and if you force it to recall something from memory you are much more likely to get a hallucination as a result. I have never experienced a hallucination from a reasoning model when prompted with a full codebase and all relevant references. It just reads the references and uses them.

It seems like you've chosen a particularly extreme example - a vintage, closed-source, binary under an emulator - didn't immediately succeed, and have written off the whole thing as a result.

A friend of mine only had an ancient compiled java app as a reference, he uploaded the binary right in the prompt, and the LLM one-shotted a rewrite in javascript that worked first time. Sometimes it just takes a little creativity and willingness to experiment.

7.5 kloc is pretty tiny, sounds like you may be able to get the entire thing into the context.

Lots of Rust libraries are relatively small since Cargo makes using many libraries in a single project relatively easy. I think that works in favor of both humans and LLMs. Treating the context window as an indication that splitting code up into smaller chunks might be a good idea is an interesting practice.

I generally have to maintain the code I write, often by myself; thousands of lines of uninspired slop code is the last thing I need in my life.

Friction is the birthplace of evolution.

Some people go camping now and then to hunt their own food, feel connected to nature, and experience that friction. They just won't want it every day. Just like they don't tend to generate the underlying uninspired assembly themselves. FWIW, if your premise is that the code they generate is necessarily unmaintainable compared to an average CS college graduate baseline, I'd argue against that premise.

I've always found it fascinating how frequently I've seen the complaint about Go re: boilerplate and unnecessary toil, but in previous statements Rust was uttered with an uncritical breath. I agree with the complaint about Go, but I have the same problem with Rust. LLMs have made Rust much more joyful for me to write, and I am sure much of this is obviously subjective.

I do like automating all the endless `Result<T, E>` plumbing, `?` operator chains, custom error enums, and `From` conversions. Manual trait impls for simple wrappers like `Deref`, `AsRef`, `Display`, etc. 90% of this is structural too, so it feels like busy work. You know exactly what to write, but the compiler can't/won’t do it for you. The LLM fills that gap pretty well a significant percentage of the time.

But to your original point, the LLM is very good at autocompleting this type of code zero-shot. I just don't think it speaks ill of Rust as a consequence.

This is akin to saying that you prefer a horse to a car because you don't have to buy gas for a horse; it can eat for free, so why use a car?

The first cars were probably much less useful than horses. They didn’t go very far, gas pumping infrastructure wasn’t widely available, and you needed specialized knowledge to operate them.

Sure, they got better. But at the outset they were a pretty poor value proposition.

Well it certainly makes error handling easy. No need to reason about complex global exception handlers and non-linear control structures. If you see an error, return it as a value and eventually it will bubble up. If err != nil is verbose but it makes LLMs and type checkers happy.

I have never seen an AI system correctly explain the following Go code:

    package main

    func alwaysFalse() bool {
        return false
    }

    func main() {
        switch alwaysFalse() // don't format the code
        {
        case true:
            println("true")
        case false:
            println("false")
        }
    }

> Go community was trained for the longest time not to make backward-incompatible API updates so that helps quite a bit in consistency of dependencies across time

Not true for Go 1.22 toolchains. When you use Go 1.21-, 1.22 and 1.23+ toolchains to build the following Go code, the outputs are not consistent:

    //go:build go1.21

    package main

    import "fmt"

    func main() {
        for counter, n := 0, 2; n >= 0; n-- {
            defer func(v int) {
                fmt.Print("#", counter, ": ", v, "\n")
                counter++
            }(n)
        }
    }

You're bringing up exceptions rather than a rule. Sure you can find things they mess up. The whole premise of a lot of the "AI" stuff is approximately solving hard problems rather than precisely solving easy ones.

The opposite is true: they sometimes guess correctly; even a broken watch is right twice a day.

I believe future AI systems will be able to give correct answers. The rule is clearly specified in the Go specification.

BTW, I haven't found an AI system can get the correct output for the following Go code:

    package main

    import "fmt"

    func main() {
        for counter, n := 0, 2; n >= 0; n-- {
            defer func(v int) {
                fmt.Print("#", counter, ": ", v, "\n")
                counter++
            }(n)
        }
    }

What do you base that prediction on? Without a fundamental shift in the underlying technology, they will still just be guessing.

Because I am indeed seeing AI systems get better and better.

It can easily explain it with a little nudge.

Not sure why you feel smug about knowing such a small piece of trivia; ‘gofmt’ would rewrite it with a semicolon anyway.

I write code in Notepad++ and never format my code. :D

Go is a great target for LLM because it needs so much boilerplate and LLMs are good at generating that.

AFAIK the borrow checker is not strictly needed to compile Rust. I think one of the GCC Rust projects started with only a compiler and deferred adding borrow checking later.

The borrow checker does not change behavior, so any correct program will be fine without borrow checking. The job of borrow checking is to reject programs only.

mrustc also does not implement a borrow checker.

Not that much different than a type checker in any language (arguably it is the same thing).

I have been using various LLMs extensively with Rust. It's not just borrow checker. The dependencies are ever-changing too. Go and Python seem to be the RISC of LLM targets. Comparatively, the most problematic thing about generated Go code is the requirement of using every imported package and declared symbol.

[deleted]

> And a surprising level of speedup.

Not surprising at all; I keep pointing out that the language benchmarking game is rarely, if at all, reflective of real-world usage.

Any time you point out how slow JS is someone always jumps up with a link to some benchmark showing that it is only 2x slower than Go (or Java, or whatever).

The benchmarks game, especially in GC'ed languages, are not at all indicative of real-world usage of the language. Real world usage (i.e. idiomatic usage) of language $FOO is substantially different from the code written for the benchmarks games.

Perhaps "real-world usage" is "… rarely, if at all, reflective of [other] real-world usage …".

Perhaps when you write "idiomatic usage" you mean un-optimized.

It doesn't surprise me at all.

Idiomatic Go leans on value types, simple loops and conditionals, gives you just enough tools to avoid unnecessary allocations, doesn't default to passing around pointers for everything, gives you more control over memory layout.

JS runtimes have to do a lot of work in order to spit out efficient code. It also requires more specialized knowledge from programmers to write fast JS.

I think esbuild and hugo are two programs that showcase this pretty well. Esbuild specifically made a splash in the JS world.

A tooling team selects a language widely used in tooling circles - wild, shocking.

Here are the FAQs, where they explain the decision to go with Go and not, say, Rust.

https://github.com/microsoft/typescript-go/discussions/categ...

Hejlsberg also says in this video, about 3.3x performance is from going native and the other 2-3x is by using multithreading. https://www.youtube.com/watch?v=pNlq-EVld70&t=51s

My surprise is that TypeScript is so slow. I have never used it, and I think I never will.

At the risk of feeling silly for not knowing this ... why is TypeScript considered a programming language, and how can you make it "faster"?

I have used it since it came out, so I do know what it is, but I have had people ask if they should write their new program in TypeScript, thinking it is something they can write in and then run.

My usage of it is limited to JavaScript, so I see it as adding a static typing layer to JavaScript, so that development is easier and more logical, and this typing information is stripped out when transpiled, resulting in pure JavaScript which is all the browser understands.

The industry calls it a programming language, so I do too just because this is not some semantic battle I want to get into. But in my mind it's not.

There's probably a word for what it is, I just can't think of it.

Type system?

And I don't understand a "10x speedup" on TypeScript, because it doesn't execute.

I can understand language services for things like VS Code that handle the TypeScript types getting 10x faster, but not TypeScript itself. I assume that is what they are talking about in most cases. But if this logic isn't right, let me know.

The "10x speedup" is for the compilation step from TS to JS, eg how much faster the new Typescript compiler is, not the runtime performance of the JS output.

Theoretically(!) using TS over JS may indirectly result in slightly better perf, because it nudges you towards not mutating the shape of runtime objects (which may cause the JS engine to re-JIT your code). The extra compilation step might also allow some source-level optimizations, like constant folding. But I think TS doesn't do this (other JS minifiers/optimizers do though).

I suspect the particular use-case of parsing/compiling is pathologically bad for JavaScript runtimes. That said, they are still leaps faster than reference Python and Ruby interpreters.

[deleted]

Depends what you mean by slow. The Typescript code was 3x slower than the Go code, and a 3x overhead is pretty much the best you can do for a dynamically typed language.

Languages like Python and Ruby are much much slower than that (CPython is easily 10x slower than V8) and people don't seem to care.

Technically Typescript can't really be slow, since it's just a preprocessor for Javascript, and the speed of its programs will depend on which Javascript implementation you use.

Typescript's sweet spot is making existing Javascript codebases more manageable.

It can also be fine in I/O-heavy scenarios where Javascript's async model allows it to perform well despite not having the raw execution performance of lower-level languages.

I thought that (for example) deno executed typescript natively?

It executes typescript without you compiling it to JavaScript first, it doesn’t make code execution any faster.

This is a good take, so thank you for sharing. Can you please rewrite it in Go?

OK, but you phrase it like it's something bad, while you say it's very, very good in practice. But maybe for different reasons than language change.

We should state: "this is true and really works, just remember language was likely only part of that".

It's rather "embrace" instead of "beware".

I'm a little confused, why beware of this?

Yes and no, Rust is suitable to solve a certain set of problems in a certain way. Or you could say: Rust brings certain qualities to the table that may or may not suit your project.

A hacker blog I read regularly ran a challenge for the fastest tokenizer in any language. I had just learned basic Rust and decided, why the heck not. I spent 15 minutes with a naive/lazy approach and entered the result. It took second place; third place was a C implementation and first place was highly optimized assembler.

This is not nothing and if I had written this in my main language (python) I wouldn't even have made the top 10.

So if you want a language where the result is comparably performant while giving you some confidence in how it is not behaving, Rust is a good choice. But that isn't the only metric. People understanding the language is also important, and there other languages shine. Everything is about trade-offs.

[deleted]

> TLDR; rebuilding in Rust was the right move.

Smart choice.

Haven't looked back since

Better do a Google reverse image search before you put fake testimonials on your website:

Alex Chen is also known as „Alexander Hipp“ or „Felix Mueller“

Though the concept is interesting - I don't like bullshit marketing like "Trusted by Professionals worldwide" if I can uncover the real deal within seconds

[deleted]

It's sad to witness how much suffering developers are ready to endure (and make their users suffer) just to avoid parting ways with the HTML/CSS/JS stack, a stack that was never designed properly, let alone for modern UI apps.

Flutter would be much better choice for such a desktop app, for example.

On the other hand, they shipped, proved the product, and got their first paying customers with a web stack. And the top HN comment thread is people talking about the downsides of Tauri cross-platform.

It's all trade-offs, unfortunately.

you still can't have multiple windows with flutter, which disqualifies it for building desktop apps.

True. But this is actively being worked on by Canonical: https://ubuntu.com/blog/multiple-window-flutter-desktop

[deleted]

How exactly is it a problem for the typical desktop app?

There is an unfortunate increasing trend to build desktop apps that only live in one window, so it's not really a "disqualification" - perhaps more a "complication".

wxWidgets or QtWidgets if you want a proper desktop GUI app.

Heck, Tcl/Tk could work as well.

My problem with wxWidgets was that I kinda wished its API was more elegant

There were also some subtle issues like the child windows leaking a lot of memory in the GTK implementations when idle (something like spamming gobjects).

I wouldn't touch Qt even with a ten-foot pole; it's too bloated for me.

that's funny, most of the time I don't even want to USE apps written in those because they're dated and janky as hell, much less write in them

How so?

[deleted]

[dead]

[dead]

> Trusted by Professionals Worldwide

Who are they? as far as I see ProductHunt with only 11 votes.

Yeah we didn't get a lot of traction on PH unfortunately and they have a lot of rules about relaunching. As far as users go - it ranges from prosumers to producers at media companies.

[deleted]

tl;dr: Came for the Mac app rewrite reactions, lingered and scrounged for info about the app.

Interesting approach to digital asset management. After watching the demo video I wanted to trial the app.

Wish I could, but it’s purchase only.

[flagged]

Wrong. There is at least Slint. But it’s not free.

Slint is free for desktop.

Slint is under three licenses: GPL, or royalty-free on desktop/mobile, or paid for embedded. So you only have to pay if you sell hardware.

[flagged]

[flagged]

It might be a good idea to remove 'One-time purchase • No subscription' from the landing page, as the pricing page instead says 'One year of updates for $99'.

It’s normal for one-time purchases to come with a limit on updates (for desktop applications). A subscription would mean that you can’t continue using the functionality after the subscription is cancelled/expires, but you can continue to use a one-time purchase forever.

One criticism of mobile app stores is that they don’t provide the option of paid major updates, and thereby strongly push adopting a subscription model.

[flagged]

This comment breaks both the site guidelines (https://news.ycombinator.com/newsguidelines.html) and the Show HN guidelines (https://news.ycombinator.com/showhn.html).

Could you please review the rules and stick to them? We'd appreciate it because we're trying for something quite different here.

Did you try Rust? It honestly feels like Python rather than C once you've internalized the mental model.

Mostly because of the very rich ecosystem of packages.