> Language designers who studied the async/await experience in other ecosystems concluded that the costs of function coloring outweigh the benefits and chose different paths.
Not really. The author provides Go as evidence, but Go's CSP-based approach far predates the popularity of async/await. Meanwhile, Zig's approach still has function coloring; it's just that one color is "I/O function" and the other is "non-I/O function". And this isn't a problem! Function coloring is fine in many contexts, especially in languages that seek to give the user low-level control! I feel like I'm taking crazy pills every time people harp on function coloring as though it were something deplorable. It's just a bad way of talking about effect systems, which are extremely useful. And sure, if you want a high-level managed language like Go with an intrusive runtime, you can build an abstraction that dynamically papers over the difference at some runtime cost; that's probably the uniformly correct choice for high-level languages such as dynamic or scripting languages. (Though it must be said that Go's approach to concurrency in general leaves much to be desired; I'm begging people to learn about structured concurrency.)
CSP is a theory about synchronization and implies nothing about green threads or M:N scheduling. Go could have used OS threads and called it CSP.
Certainly it’s true that Go invented neither: both Erlang and Haskell had truly parallel green threads without function coloring before Go or Node existed.
I agree with you, but the big difference between function arguments and effect systems is that the tools we have for composing functions with arguments are a lot simpler to deal with than the tools we have for composing effects.
You could imagine a programming language that expressed “comptime” as a function argument of a type that is only constructible at compile-time. And one for runtime as well, and then functions that can do both can take the sum type “comptime | runtime”.
Or use OCaml 5 which has a full algebraic effects system that solves the function coloring problem while still being highly performant.
How do they solve it?
That is an unfair characterization of Zig. The OP correctly points out:
> Function signatures don’t change based on how they’re scheduled, and async/await become library functions rather than language keywords.
The functions have the same calling conventions regardless of IO implementation. Functions return data and not promises, callbacks, or futures. Dependency injection is not function coloring.
> The functions have the same calling conventions regardless of IO implementation.
Okay, then by that definition, Rust doesn't have colored functions either, because `async fn foo() -> Bar {` in Rust is just syntax sugar for `fn foo() -> impl Future<Output=Bar> {` (that's a normal function that returns a future), and `Future` is just a normal trait that provides a `poll` method; there's no different calling convention anywhere. So which is it? Either Rust and Zig both have colored functions, or neither do.
These things _are_ function colouring, but they show function colouring isn't scary or hard.
The original function colouring essay was much more about JavaScript's implementation than a general statement.
If JavaScript had exposed a way for a synchronous function to call back into the runtime to wait for an async function to complete, it would still be just as coloured, but no one would be complaining about colour (deadlocks yes, but that's another kettle of fish).
Boost.Asio (2005) is surely worth a mention. But the pattern predates this by decades. Green threads, which are what goroutines are, come from the 1990s.
I mean Java's Loom feels like the 'ultimate' example of the latter for the _ordinary_ programmer, in that it effectively leaves you just doing what looks like completely normal threads however you so please, and it all 'just works'.
Java has gone full circle.
Java had green threads in 1997, removed them in 2000 and brought them back properly now as virtual threads.
I'm kinda glad they've sat out the async mania. With virtual threads/goroutines, the async stuff just feels like lipstick on a pig: debugging, stack traces, etc. are just jumbled.
I don't think comparing 97's green threads to virtual threads ever made sense.
Their purpose, implementation, everything is just so different; they don't share anything at all.
In Rust debugging and stacktraces are perfectly fine because async/futures compile to a perfect state machine.
They are not perfectly fine. If a task panics then you will get the right stack trace, but there is no way to get a stack trace for a task that’s currently waiting. (At least not without intrusive hacks.)
Would this be considered an intrusive hack?
https://docs.rs/tokio/latest/tokio/runtime/struct.Handle....
> This functionality is experimental, and comes with a number of requirements and limitations.
I assume that answers your question.
So once it's out of the experimental stage it won't be an intrusive hack anymore?
They stopped at the Promises level with CompletableFuture, which led to "colored frameworks" like WebMVC vs. WebFlux in Spring.
Who is "they"? Java has moved past those promise-based APIs and avoided the async/await mistake.
Java didn't really "sit it out". It launched CompletableFutures, CompletionStages, Sources and Sinks, arguably even streams. All of those are standard-library forms of async programming. People tried to make them catch on, but the experience of using them (the runtime wrapping all your errors in CompletionExceptions, destroying your call stacks) just made them completely useless.
Every Java codebase using something like Flux serves as a datapoint in favor of this argument - they're an abomination to read, reason about or (heaven help) debug.
I'm curious how escape analysis works with virtual threads. With the asynchronous model, an object local to a function will be migrated to the old generation heap while the external call gets executed. With virtual threads I imagine the object remains in the virtual thread "stack", therefore reducing pressure in garbage collection.
The initial Loom didn't really provide the semantics and ergonomics of async/await which is why they immediately started working on structured concurrency.
And for my money I prefer async/await to the structured concurrency stuff.
What should people read to learn about structured concurrency?
I think the clearest sales pitch comes from this post from the author of Trio, which is an implementation of structured concurrency for Python: https://vorpus.org/blog/notes-on-structured-concurrency-or-g... .
Perhaps Java's related JEPs could be a good starting point?
https://openjdk.org/jeps/505
There are also related discussions on other platforms that are worth reading.
In my experience people complain about it because they are coming from a blocking first mindset. They're trying to shoehorn async calls into an inherently synchronous structure.
A while back I just started leaning in. I write a lot of Python at work, and anytime I have to use a library that relies on asyncio, I just write the entire damn app as an asynchronous one. Makes function coloring a non-issue. If I'm in a situation where the two have to coexist, the async runtime gets its own thread and communication back and forth is handled at specific boundaries.
>In my experience people complain about it because they are coming from a blocking first mindset. They're trying to shoehorn async calls into an inherently synchronous structure.
There's no "inherently synchronous structure", at least not in JavaScript. The nature is synchronous; asynchronous is an illusion built on top of it. Which is why you can easily block an "asynchronous" program: a synchronous busy-wait on any async function will do. JavaScript execution is synchronous on a single call stack. That's why they added Workers, which are different from async.
Rust's Tokio and co. are also blocking underneath. You need threads to get something that isn't an inherently synchronous system with merely a facade of cooperative asynchronicity.
> Makes function coloring a non-issue.
Yes, having to rewrite literally all of your code because you need to use an async function somewhere is an issue.
An even bigger issue is that now you have two (incompatible!) versions of literally every library dependency.
I'm usually writing applications, not libraries, so it's a non-issue for me.
I was talking about writing something from scratch.
> They're trying to shoehorn async calls into an inherently synchronous structure.
You can make any async system synchronous. It's much harder to make a sync system asynchronous. (Misquoting something Erlang-related.)
There are many cases when I don't care if a function call is asynchronous. I'm happy to wait for the result. Yet too many systems tell me I can't, for no good reason.