That did seem excessive to me as well. I do worry about the DX of trying to work on an app with this. After each edit, I would expect a solid compile time to simply try your work.

I don't think GPUI has it integrated yet, but Dioxus's Subsecond tool [0] implements binary hot-patching for Rust apps, which can help alleviate this problem.

The other thing you can do (which is popular in the Bevy community) is to compile the "core runtime" into a dynamic library. Then you don't need to recompile that set of crates for incremental builds.
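As a rough sketch of what that looks like (the crate name here is hypothetical; Bevy itself exposes this via its `dynamic_linking` Cargo feature in recent versions):

```toml
# Cargo.toml of a hypothetical "core_runtime" crate: build it as a
# Rust dylib so incremental app builds only relink against it
[lib]
crate-type = ["dylib"]
```

With Bevy you don't even need the manual setup; for dev builds it's just `cargo run --features bevy/dynamic_linking`.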

[0]: https://github.com/DioxusLabs/dioxus/tree/main/packages/subs...

> The other thing you can do (which is popular in the Bevy community) is to compile the "core runtime" into a dynamic library. Then you don't need to recompile that set of crates for incremental builds.

I'm curious as to what this means exactly. Are you saying keep the UI stuff in a separate crate from the rest of the app, or something else? And just a separate crate, or an actual dynlib? (Wouldn't that imply a C ABI? That would make it a pain to interface with.)

An actual dynlib, containing the core framework crates that typically don't change between compiles (and which in the C world might be installed as precompiled system libraries).

It doesn't necessarily require a C ABI. Rust makes no guarantees about the stability of the Rust ABI, but if you compile the app and the dynlib with the same compiler version then it works in practice (and IIRC enough things rely on this that it is unlikely to break in future).

That does mean you need to recompile the dynlib when you upgrade the compiler, but that is probably infrequent enough not to be a huge issue. Certainly if your aim is fast recompiles in response to e.g. UI style changes then it ought to work.

--

A note on the sort of Rust compile times I see for a TodoMVC app using my Rust UI framework (412 dependencies):

- A clean release build (-O3) is 1m 01s

- An incremental release (-O3) rebuild is 1.7s

- A clean debug build (-O0) is 35s

- An incremental debug build (-O0) is 1s

That's on a 2021 MacBook M1 Pro which is fairly fast, but I hear the M4 machines are ~twice as fast. And that's also without any fancy tricks.

I did some quick research. I knew the Rust ABI was unstable, but I didn't realize you could create a Rust-ABI dynlib and Rust would automatically dynamically link it. For intra-app use it would work just fine. Neat. Link: https://stackoverflow.com/questions/75903098/dynamic-linking...

However, I don't see what advantage this gives. You are going to specify that dependency in your Cargo.toml just like any statically linked crate. Anything that would invalidate the cache for a static crate would invalidate it for a dynamically linked crate. In other words, it seems like separate crates are the magic here, not the linking type. What am I missing?

Thanks for the build stats. Those are helpful. I have an M1 Max currently.

UPDATE: Good points below. As a dynlib it would create a boundary for sure (no LTO, etc.). Worth playing with, thx.

> I don't see what advantage this gives

I believe it may "just" be faster link times. That may seem minor, but link times can often dominate incremental compile times, because linking is a slow and (at least historically) serial step that is O(total code size) even when the compilation itself is incremental.

See mold's linking benchmarks: https://github.com/rui314/mold. It can be the difference between tens of seconds with traditional linkers and under 2s with newer ones.

There are a few strategies for dealing with this:

1. Use a faster multi-threaded linker. On Linux, lld, mold, and wild are all much faster than the traditional ld/gold (and the latter two are another step beyond lld). On macOS, the new built-in ld64 is pretty good. Not sure what the state is on Windows: possibly lld is best?

2. Dynamic linking, as above. This seems to be faster even though the dynamic links need to be resolved at runtime, presumably because the links wholly within the dynlib don't need to be resolved again.

3. Tools like Subsecond (https://github.com/DioxusLabs/dioxus/tree/main/packages/subs...), which effectively implement incremental linking by diffing the symbols in object files.
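For strategy 1, switching linkers is usually a one-time config change. A sketch for mold on Linux (assumes mold and clang are installed; adjust the target triple as appropriate):

```toml
# .cargo/config.toml — have rustc drive the link through clang + mold
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```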

Even in Dioxus the usefulness is somewhat limited right now though.

(dioxus-7-rc.3)

It usually only works when reordering elements or changing static values (styles, attributes, etc.).

Which, to be fair, does speed things up a lot when tinkering with small details.

But 70%+ or so of my changes still result in recompiles.

Are you talking about the "hotreloading" or the "hotpatching"? (There are two separate mechanisms.) The hotreloading just does RSX and assets and is very fast; the hotpatching is a recompile (there's no getting around compiling Rust code), but it should be a faster one, and in many cases it should be able to maintain application state.

I've been able to get the hotpatching to work for use cases like "extract some UI code" into a new component that didn't exist before.

Note that the hotpatching is not enabled by default; you have to specify --hotpatch when running dx.

Ah, thanks for the hint.

I indeed was not enabling it.

I'll give it a try!

Incremental compiles should be fast.