Agree with the other commenters that the title is a bit too dramatic. The content was well written and got the point across.
I still don’t have enough experience to have a strong opinion on Rust async, but some things did stand out.
On the good side, it’s nice being able to have explicit runtimes. Instead of polluting the whole project with async, you can do the opposite: be sync first and use the runtime only at the IO “edges”. This was a great fit for a project I’m working on, and it seems like a pretty similar strategy to what Zig is doing with IO code. It largely solved the function coloring problem in this particular case. Strict separation of IO and CPU-bound code was a requirement regardless of the async stuff, so using the explicit IO runtime was natural.
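A minimal sketch of that sync-first pattern: keep the core logic synchronous and only enter a runtime at the IO boundary. The `block_on` below is a toy std-only executor just for illustration (a real project would call something like `tokio::runtime::Runtime::block_on` here), and `fetch_config`/`parse_retries` are invented names:

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Toy single-future executor: just enough to run a future to completion
// at the sync/async boundary. A real runtime would park until woken.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        std::thread::yield_now();
    }
}

// Hypothetical async IO edge; everything below it stays sync.
async fn fetch_config() -> String {
    "max_retries=3".to_string()
}

// Plain sync core logic: no async coloring anywhere in here.
fn parse_retries(config: &str) -> u32 {
    config
        .strip_prefix("max_retries=")
        .and_then(|v| v.parse().ok())
        .unwrap_or(0)
}

fn main() {
    // Async only at the edge; the rest of the call graph is sync.
    let config = block_on(fetch_config());
    let retries = parse_retries(&config);
    assert_eq!(retries, 3);
    println!("retries = {retries}");
}
```

The key property is that only the one call site touches the runtime; nothing in the sync core needs to be rewritten as async.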
On the bad side, it seems crazy to me how much the whole ecosystem depends on tokio. It’s almost as if Java’s GC were optional, but in practice everyone just used the same third-party GC runtime, and pulling in any library forced you to use that runtime. This sort of central dependency is simply not healthy.
So depending on your context, it may seem like the whole ecosystem depends on tokio, but if you look at say, embedded Rust, it makes a little more sense.
The system requirements for an async runtime on a workstation processor and on, say, an RP2040 look very different. But given the ability to swap out the backend, when I write async IO code for a small ARM M0 microcontroller, that code looks almost identical to what I'd be writing outside that context, just with an embedded-focused runtime, i.e. Embassy.
I can focus less on runtime specifics since they use the same traits and interfaces. Compared with, say, using a small RTOS or rolling your own async environment, it's quite nice.
Much of what I need to learn to write the async code in embassy can cross over to other domains.
What's the alternative? I'm happy to use tokio, but I'm happy other folks can enjoy other executors (smol, async-std, glommio, etc). I think the situation is OK because tokio is well-maintained, even though it's not part of the standard library, and I'm afraid making it part of the standard library would make it harder to use other executors, and harder to port the standard library to other platforms.
But maybe my fears are unfounded.
> What's the alternative?
Traits in the stdlib for common functionality like "spawn" (a task) and things like async timers. Then executors could implement those traits and libraries could be generic over them.
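A hypothetical sketch of what such traits might look like; note that `Spawn` and `Timer` here do not exist in std, the names are invented for illustration. The point is that a library can be generic over the trait instead of importing tokio:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::time::Duration;

// Hypothetical traits std could define; executors would implement them.
trait Spawn {
    fn spawn(&self, fut: Pin<Box<dyn Future<Output = ()> + Send>>);
}

trait Timer {
    fn sleep(&self, dur: Duration) -> Pin<Box<dyn Future<Output = ()> + Send>>;
}

// Library code generic over any executor: no runtime dependency at all.
fn run_once<E: Spawn + Timer>(executor: &E, delay: Duration, done: Arc<AtomicBool>) {
    let tick = executor.sleep(delay);
    executor.spawn(Box::pin(async move {
        tick.await;
        done.store(true, Ordering::SeqCst);
    }));
}

// Toy executor implementing the traits, to show the wiring.
struct Inline;

struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

impl Spawn for Inline {
    fn spawn(&self, mut fut: Pin<Box<dyn Future<Output = ()> + Send>>) {
        // Demo only: drive the future to completion synchronously.
        let waker = Waker::from(Arc::new(NoopWaker));
        let mut cx = Context::from_waker(&waker);
        while let Poll::Pending = fut.as_mut().poll(&mut cx) {}
    }
}

impl Timer for Inline {
    fn sleep(&self, dur: Duration) -> Pin<Box<dyn Future<Output = ()> + Send>> {
        // Demo only: block at creation, then hand back a ready future.
        std::thread::sleep(dur);
        Box::pin(std::future::ready(()))
    }
}

fn main() {
    let done = Arc::new(AtomicBool::new(false));
    run_once(&Inline, Duration::from_millis(1), done.clone());
    assert!(done.load(Ordering::SeqCst));
    println!("task ran through the generic interface");
}
```

Swapping `Inline` for a tokio- or embassy-backed implementation would leave `run_once` untouched, which is the whole appeal.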
Yep. We could have a system like how there's a global system allocator, but you can override it if you want in your app.
We could have something similar for a global async executor which can be overridden. Or maybe you launch your own executor at startup and register it with std, but after that almost all async spawn calls go through std.
And std should have a decent default executor, so if you don't care to customise it, everything should just work out of the box.
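One way the "register it at startup" idea could look, by analogy with the global allocator. Everything here is hypothetical API surface, sketched as a `OnceLock` registry rather than an attribute:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, OnceLock};
use std::task::{Context, Poll, Wake, Waker};

type BoxFuture = Pin<Box<dyn Future<Output = ()> + Send>>;

// Hypothetical std-side hook: register an executor once at startup,
// after which all spawn calls route through it.
trait Executor: Send + Sync {
    fn spawn(&self, fut: BoxFuture);
}

static EXECUTOR: OnceLock<Box<dyn Executor>> = OnceLock::new();

fn register_executor(exec: Box<dyn Executor>) {
    if EXECUTOR.set(exec).is_err() {
        panic!("executor already registered");
    }
}

// What a std-level `spawn` free function might look like.
fn spawn(fut: BoxFuture) {
    // A real std would fall back to a default executor here.
    EXECUTOR.get().expect("no executor registered").spawn(fut);
}

// Toy executor: drives each future to completion on the calling thread.
struct Inline;
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}
impl Executor for Inline {
    fn spawn(&self, mut fut: BoxFuture) {
        let waker = Waker::from(Arc::new(NoopWaker));
        let mut cx = Context::from_waker(&waker);
        while let Poll::Pending = fut.as_mut().poll(&mut cx) {}
    }
}

static RAN: AtomicUsize = AtomicUsize::new(0);

fn main() {
    register_executor(Box::new(Inline));
    spawn(Box::pin(async {
        RAN.fetch_add(1, Ordering::SeqCst);
    }));
    assert_eq!(RAN.load(Ordering::SeqCst), 1);
    println!("task ran through the registered global executor");
}
```

This mirrors how `#[global_allocator]` works today: one process-wide slot, overridable by the application, invisible to library code.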
Good point, but the devil lies in the details. How should the timers behave? Is the clock monotonic? Are tasks spawned on the same thread? Different platforms and executors have different opinions. Maybe it's still possible and just a lot of work?
> Maybe it's still possible and just a lot of work?
Yeah, I think that's the current status. I believe it was for a long time (and possibly still is) blocked on language improvements to async traits (which didn't exist at all until relatively recently and still don't support dyn trait).
> the devil lies in the details
This is true, but perhaps not uniquely so, when compared to the platform dependence of the standard library already. File semantics, sync primitive guarantees and implementations, timers and timer resolutions, etc. have subtle differences between platforms that the Rust stdlib makes no further guarantees about.
100% this
How nice would it be if there were ReadAsync and WriteAsync traits in the standard library?
Right now, every executor (and the futures crate) implements their own and there are compat crates to bridge the gaps.
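The shape of that problem can be sketched with two invented modules standing in for, e.g., `futures::io::AsyncRead` and `tokio::io::AsyncRead`. The real signatures differ (poll-based, `ReadBuf`, `Pin<&mut Self>`, etc.), so this is deliberately simplified, but the compat-adapter pattern used by crates like `tokio-util` is the same:

```rust
use std::io;

// Stand-ins for two ecosystems defining the "same" trait independently.
mod futures_like {
    use std::io;
    pub trait AsyncRead {
        fn poll_read(&mut self, buf: &mut [u8]) -> io::Result<usize>;
    }
}

mod tokio_like {
    use std::io;
    pub trait AsyncRead {
        fn poll_read(&mut self, buf: &mut [u8]) -> io::Result<usize>;
    }
}

// A compat shim: wrap a type implementing one ecosystem's trait
// so it satisfies the other ecosystem's trait.
struct Compat<T>(T);

impl<T: futures_like::AsyncRead> tokio_like::AsyncRead for Compat<T> {
    fn poll_read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        self.0.poll_read(buf)
    }
}

// A reader that only knows about the futures-like trait.
struct Memory(Vec<u8>);
impl futures_like::AsyncRead for Memory {
    fn poll_read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let n = self.0.len().min(buf.len());
        buf[..n].copy_from_slice(&self.0[..n]);
        self.0.drain(..n);
        Ok(n)
    }
}

// Library code written against the tokio-like trait.
fn read_all<R: tokio_like::AsyncRead>(r: &mut R) -> io::Result<Vec<u8>> {
    let mut out = Vec::new();
    let mut buf = [0u8; 8];
    loop {
        let n = r.poll_read(&mut buf)?;
        if n == 0 {
            return Ok(out);
        }
        out.extend_from_slice(&buf[..n]);
    }
}

fn main() -> io::Result<()> {
    let mut r = Compat(Memory(b"hello world".to_vec()));
    let data = read_all(&mut r)?;
    assert_eq!(data, b"hello world");
    println!("{}", String::from_utf8_lossy(&data));
    Ok(())
}
```

A single stdlib trait would make both the duplicate definitions and the `Compat` wrapper unnecessary.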
It would make sense to have an official default async runtime in the standard library while keeping the door open to use any other runtime, just like we already have for the heap allocator or reference-counting garbage collection.
There are issues in particular with core traits for IO or Stream being defined in third-party libraries like tokio, futures, or their variants. I've seen many cases where libraries have to reexport such types, but they are pinned to the version they depend on, so you can end up with multiple versions of basic async types in the same codebase that have the same name and are incompatible.
As of now I don’t think there’s an alternative. I’m not a Rust expert, but the core issue to me is that “async” goes beyond just having a Futures scheduler. Async stuff usually needs network, disk, OS interaction, and future utilities (spawn), and these are all things the runtime (tokio) provides. It’s pretty hard for runtimes to be compatible with each other unless the language itself provides those.
That's not the core issue at all; it's lifetimes and allocations.
Can you elaborate on this please? Do you mean that’s basically impossible for rust std to provide a default runtime that makes “everyone” (embedded on one end and web on the other) happy?
I think that's the problem in essence, yes. Different executors built on top of different primitives and with different execution strategies will have mutually incompatible constraints.
To spawn a future on tokio, it has to implement `Send`, because tokio is a work-stealing executor. That isn't the case for monoio or other non-work-stealing async executors, where tasks are pinned to the thread they are spawned on and so do not require `Send` or `Sync`, so you can use Rc/RefCell.
Moreover, the way that async executors schedule execution can be _different_. I have a small executor I made that is based on the runtime model of the JS event loop. It's single-threaded async, with explicit creation of new worker threads. That isn't a model that can "slot in" to a suite of traits that adequately represents the abstraction provided by tokio, because the abstraction of my executor and the way it schedules tasks are fundamentally different.
Any reasonably-usable abstraction for the concept of an async runtime would impose too many constraints on the implementation in the name of ensuring runtime-generic code can execute on any standard runtime. A Future, for better or worse, is a sufficiently minimal abstraction of async executability without assuming anything about how the polling/waking behavior is implemented.
> To spawn a future on tokio, it has to implement `Send`, because tokio is a work-stealing executor.
Tokio's default executor is a work-stealing multi-threaded executor, but it also has a local executor and a current-thread executor, which can run !Send futures.
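The difference in bounds can be shown at the type level alone. This is a std-only sketch: `spawn_work_stealing` and `spawn_local` are invented stand-ins mimicking the bounds of `tokio::spawn` and `tokio::task::spawn_local` respectively, without depending on tokio:

```rust
use std::future::Future;
use std::rc::Rc;

// Mimics the bound on tokio::spawn: the future may migrate between
// worker threads, so it must be Send.
fn spawn_work_stealing<F: Future + Send + 'static>(_fut: F) {}

// Mimics a thread-local spawn (tokio's LocalSet, monoio, etc.):
// the future stays on one thread, so no Send bound is needed.
fn spawn_local<F: Future + 'static>(_fut: F) {}

fn main() {
    let shared = Rc::new(41);

    // Fine: a !Send future (it captures an Rc) on a thread-local spawn.
    let rc = shared.clone();
    spawn_local(async move {
        let _ = *rc + 1;
    });

    // This would NOT compile: Rc is !Send, so the future is !Send,
    // and the work-stealing bound rejects it.
    // spawn_work_stealing(async move { let _ = *shared + 1; });

    assert_eq!(*shared, 41);
    println!("spawn_local accepts !Send futures");
}
```

The executor's scheduling strategy thus leaks directly into the trait bounds that any generic spawn API would have to commit to.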
Here are some alternatives for concurrent operations in Rust that don't use async. Which of these are available depends on the target, e.g. embedded/low-level vs. a general-purpose OS. I use all of these across my Rust projects:
Most of you are already aware. I bring this up because I have observed that in the Rust OSS community (especially embedded), people sometimes refer to not using async as blocking, and are not aware that async isn't the only way to manage concurrency. People new to it are learning it this way: "If you're not using Tokio or Embassy (or some other executor), you are blocking a process."

That's kind of wild... I'm relatively novice with Rust still, but I was pretty aware that the different executors weren't the only async option... I thought it was pretty cool that you could opt into tokio for the bulk of async request work, but if I wanted to use a pool for specific workers, or something else on a more monolithic service/application, I could still launch my own threads for that use case pretty easily.
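That "launch my own threads" option can be sketched std-only: a small blocking worker pool on `std::thread` and channels, with no executor involved at all. The function name and workload are invented for illustration:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// A tiny blocking worker pool: concurrency without any async runtime.
fn pooled_sum_of_squares(inputs: Vec<u64>, workers: usize) -> u64 {
    let (job_tx, job_rx) = mpsc::channel::<u64>();
    // mpsc receivers are single-consumer, so share one behind a Mutex.
    let job_rx = Arc::new(Mutex::new(job_rx));
    let (res_tx, res_rx) = mpsc::channel::<u64>();

    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let job_rx = Arc::clone(&job_rx);
            let res_tx = res_tx.clone();
            thread::spawn(move || {
                // Each worker pulls jobs until the channel closes.
                loop {
                    let job = { job_rx.lock().unwrap().recv() };
                    match job {
                        Ok(n) => res_tx.send(n * n).unwrap(),
                        Err(_) => break,
                    }
                }
            })
        })
        .collect();

    let count = inputs.len();
    for n in inputs {
        job_tx.send(n).unwrap();
    }
    drop(job_tx); // close the channel so workers exit

    let total: u64 = (0..count).map(|_| res_rx.recv().unwrap()).sum();
    for h in handles {
        h.join().unwrap();
    }
    total
}

fn main() {
    // 1² + 2² + 3² + 4² = 30
    assert_eq!(pooled_sum_of_squares(vec![1, 2, 3, 4], 2), 30);
    println!("pool result: 30");
}
```

Nothing here is "blocking a process" in any problematic sense; it's just ordinary OS-thread concurrency, which is a perfectly good fit for a dedicated worker pool inside a larger service.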
The hardest parts for me to grok really came down to lifetime memory management, for example a static/global dictionary as a cache, but being able to evict/recover entries from that dictionary for expired data... This is probably the use case that IMO is one of the least well documented, or at least lacking in discoverable tutorials etc.
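For what it's worth, a std-only sketch of that use case: a process-global cache behind `OnceLock` + `Mutex`, with time-based eviction. All names are hypothetical, and real code might reach for a crate like `dashmap` or an LRU implementation instead:

```rust
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};
use std::time::{Duration, Instant};

// Process-global cache: OnceLock gives safe lazy init of a static,
// Mutex gives interior mutability across threads.
static CACHE: OnceLock<Mutex<HashMap<String, (Instant, String)>>> = OnceLock::new();

fn cache() -> &'static Mutex<HashMap<String, (Instant, String)>> {
    CACHE.get_or_init(|| Mutex::new(HashMap::new()))
}

fn put(key: &str, value: &str) {
    cache()
        .lock()
        .unwrap()
        .insert(key.to_string(), (Instant::now(), value.to_string()));
}

fn get(key: &str, ttl: Duration) -> Option<String> {
    let mut map = cache().lock().unwrap();
    match map.get(key) {
        Some((inserted, v)) if inserted.elapsed() <= ttl => Some(v.clone()),
        Some(_) => {
            // Expired: evict on access.
            map.remove(key);
            None
        }
        None => None,
    }
}

// Bulk eviction pass, e.g. run periodically from a background thread.
fn evict_expired(ttl: Duration) {
    cache()
        .lock()
        .unwrap()
        .retain(|_, (inserted, _)| inserted.elapsed() <= ttl);
}

fn main() {
    put("user:1", "alice");
    assert_eq!(get("user:1", Duration::from_secs(60)), Some("alice".to_string()));

    // Let a little time pass so a zero TTL is definitely exceeded.
    std::thread::sleep(Duration::from_millis(5));
    assert_eq!(get("user:1", Duration::ZERO), None);
    assert_eq!(get("user:1", Duration::from_secs(60)), None);

    put("user:2", "bob");
    std::thread::sleep(Duration::from_millis(5));
    evict_expired(Duration::ZERO);
    assert_eq!(get("user:2", Duration::from_secs(60)), None);
    println!("cache eviction works");
}
```

The `'static` lifetime question the comment raises is handled by the `OnceLock` here: the map lives for the whole process, and entries are dropped by `remove`/`retain` rather than by scope.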
The best alternative, by far, is don't require async. Async is much harder to work with than other methods of gaining concurrency, and its benefits (like not needing OS context switches) are irrelevant to most developers. There is no good reason that the majority of Rust libraries force their users into async in all its messiness.
As you mentioned Java, it’s interesting to note that it has had similar problems throughout its history: logging (now it’s settled on slf4j but you still find libraries using something else), commons (first Apache Commons, now Guava), JSON (it has settled on Jackson but things like Gson and Simple-json are not uncommon to see), nullability annotations (first with unofficial distributions of JSR-305, which never became official, then the Checker Framework, and lately with everything migrating to JSpecify). All this basic stuff needs to be provided by the language to avoid this fragmentation and quasi-de-facto libraries from appearing.
The traditional approach in Java has been to let those things happen in third-party space, then form an expert group to standardise a shared API for them. That was done with XML parsers and ORM fairly successfully. It doesn't always work, as with your examples: there was an attempt with logging, but it was done badly, JSR-305 ran aground, etc. But I think it's a much better approach than the JDK maintainers trying to get it right first time.
But this fragmentation is what's needed to make good software. If you put things in the standard library, you're just adding a +1 to the fragmented landscape, because, for instance, it will never be specialized enough to cover all use cases, so people will still use their own libraries. It's just like how C++ has three dozen distinct implementations of hash maps, because one cannot fit all cases.
It could also be argued that putting a specific executor model into the standard library will make the problem worse because it will give library crates license to use it without considering alternatives because it is standard. At least today taking a dependency on a specific runtime is a well-known boondoggle.
Not only that, but there kind of is a de facto standard (tokio), which is pretty much the default if you aren't in a specific, resource-constrained use case.
Commons is something that has gradually been migrated into the standard library, at least the parts deemed necessary for most projects. I don't use Apache Commons or Guava at all in Java (now at 25 or 26, depending on the project). There are still some libs that depend on them, but I would argue that most use them out of inertia rather than actual need.
As for slf4j, I still don't see the justification for an abstraction layer on top of logging. I never, ever migrated from one logger to another, and even if I did need to, it would be easy, as most loggers are very similar. That's why I decided to just use log4j2 directly in my latest project.
The logging implementation should be an application level decision. By using a facade like slf4j a library allows an application using any logging implementation to use it. That’s why libraries should use it.
Not everyone uses tokio. Almost everyone on desktop/server uses tokio, with a few macOS-specific things wrapping Grand Central Dispatch. But the embedded world is full of custom runtimes.
It's very much possible to use Rust with async in a lot of areas without being dependent on tokio. I think it's really just the web/server stuff that's entirely tokio-dependent. Writing libraries to be executor-agnostic is not terribly difficult, but it does require some diligence that isn't necessarily present in most of the community.
It really depends on the abstraction model of the library. If the library needs to actually read/write a file, it either needs to depend on a runtime or provide some horrific abstraction over the process it will use to do that. This doesn't apply to sync IO libraries, which can just use the standard library.
Web/server frameworks have to bind to a runtime because they have to make decisions about how to connect to a socket. Hyper is sufficiently abstract that it doesn't require any runtime, but using hyper directly provides no framework-like benefits and requires that you make those decisions and provide a compatible socket-like implementation for sending requests.
That's the thing though: it's possible, but it makes the simple hello-world example more tedious. It's totally possible to make an abstraction layer, provide a tokio implementation out of the box, but leave the door open for other implementations to slot in. Anyone who's written portable code for non-POSIX systems is used to this experience. Standardization is definitely better, but it also has its own share of problems, as it can limit what's possible. I expect that the decision not to standardize these interfaces too early will end up leading to a better long-term design, especially if major improvements to async are on the horizon and can alter the final shape of that standard.