> But it has the key limitation that you don't know how much stack to allocate. If you allocate too much stack in advance, you end up being not much cheaper than OS threads; but if you allocate too little stack, you can easily hit stack overflow.
With a 64-bit address space you can reserve large contiguous chunks (e.g. 2MB) while only committing the minimum necessary for the optimistic case. The real problem isn't memory usage per se, it's all the VMA manipulation and noise. In particular, guard pages traditionally require a separate VMA for each guard (usually two per stack, above and below). Linux recently gained a new madvise feature, MADV_GUARD_INSTALL/MADV_GUARD_REMOVE, which lets you mark cheap guard pages inside an existing mapping without creating a distinct VMA per guard. (https://lwn.net/Articles/1011366/) This is the kind of feature that could reduce the overhead of stackful coroutines/fibers.

In theory fibers should be able to outperform explicit async/await code. In the non-recursive, non-dynamic-call case, a fiber's stack can be stack-allocated by the caller, making it no more costly than allocating the equivalent async/await call frame; in the recursive and dynamic-call cases, you avoid per-frame dynamic allocation, which in the majority of situations is unnecessary. The poor performance of dynamic frame allocation/deallocation in deep dynamic call chains is the reason Go switched from segmented stacks to movable stacks.
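To make that concrete, here's a rough sketch (not any particular library's API) of what reserving a fiber stack could look like on Linux 6.13+ from Rust, assuming the libc crate; MADV_GUARD_INSTALL isn't exposed by the crate yet, so its value below is copied from the kernel's uapi headers, and the function/constant names are made up:

    // Assumes Cargo dependency: libc = "0.2"; Linux 6.13+ for MADV_GUARD_INSTALL.

    const STACK_RESERVE: usize = 2 * 1024 * 1024; // reserve 2 MiB of address space
    const PAGE_SIZE: usize = 4096;

    // Not in the libc crate at the time of writing; value taken from the
    // Linux uapi headers (assumption: 6.13+ kernel).
    const MADV_GUARD_INSTALL: libc::c_int = 102;

    fn reserve_fiber_stack() -> std::io::Result<*mut u8> {
        unsafe {
            // One anonymous private mapping. MAP_NORESERVE avoids charging the
            // full 2 MiB against overcommit accounting, and demand paging means
            // pages are only backed once the fiber actually touches them.
            let base = libc::mmap(
                std::ptr::null_mut(),
                STACK_RESERVE,
                libc::PROT_READ | libc::PROT_WRITE,
                libc::MAP_PRIVATE | libc::MAP_ANONYMOUS | libc::MAP_NORESERVE,
                -1,
                0,
            );
            if base == libc::MAP_FAILED {
                return Err(std::io::Error::last_os_error());
            }

            // Mark the lowest, still-untouched page as a guard. Unlike a separate
            // PROT_NONE mapping, this doesn't split the region into extra VMAs.
            // (A real implementation would munmap on failure here.)
            if libc::madvise(base, PAGE_SIZE, MADV_GUARD_INSTALL) != 0 {
                return Err(std::io::Error::last_os_error());
            }

            // The stack grows down from the top of the reservation.
            Ok((base as *mut u8).add(STACK_RESERVE))
        }
    }

    fn main() -> std::io::Result<()> {
        let top = reserve_fiber_stack()?;
        println!("fiber stack top at {top:p}");
        Ok(())
    }

The point is that the whole stack is one VMA: you get a 2MB reservation, pay physical memory only for pages you touch, and still get an overflow trap at the bottom, without the extra mapping churn that per-stack PROT_NONE guards cause.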
Another major cost of fibers/threads is context switching: most existing solutions save and restore all registers. But for coroutines (stackless or stackful) there's no need to do this. See, e.g., https://photonlibos.github.io/blog/stackful-coroutine-made-f..., which tweaked Clang to eliminate this cost and bring switches in line with normal function calls.
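For illustration, a switch that saves only the System V callee-saved registers (rather than the full register file a ucontext-style switch preserves) might look like the toy sketch below. This is x86-64 Linux only, needs a recent Rust toolchain, ignores mxcsr/x87 state and error handling, and the names (swap_ctx, fiber_entry) are made up; it is not Photon's implementation, which goes further by making the switch visible to the compiler.

    use std::arch::global_asm;
    use std::ptr::addr_of_mut;

    // Cooperative switch: save the callee-saved GPRs (rbx, rbp, r12-r15) plus
    // the return address, store the old stack pointer, load the new one.
    global_asm!(
        ".global swap_ctx",
        "swap_ctx:",
        "push rbp",
        "push rbx",
        "push r12",
        "push r13",
        "push r14",
        "push r15",
        "mov [rdi], rsp", // *from = stack pointer of the suspending context
        "mov rsp, rsi",   // switch to the resuming context's stack
        "pop r15",
        "pop r14",
        "pop r13",
        "pop r12",
        "pop rbx",
        "pop rbp",
        "ret",
    );

    unsafe extern "C" {
        // from: where to store the suspended context's stack pointer
        // to:   saved stack pointer of the context to resume
        fn swap_ctx(from: *mut *mut u64, to: *mut u64);
    }

    static mut MAIN_SP: *mut u64 = std::ptr::null_mut();
    static mut FIBER_SP: *mut u64 = std::ptr::null_mut();

    extern "C" fn fiber_entry() {
        println!("hello from the fiber stack");
        unsafe { swap_ctx(addr_of_mut!(FIBER_SP), MAIN_SP) }; // yield back
        unreachable!("never resumed again in this demo");
    }

    const STACK_WORDS: usize = 8 * 1024; // 64 KiB fiber stack

    #[repr(align(16))] // keep the seeded stack pointer 16-byte aligned
    struct Stack([u64; STACK_WORDS]);

    fn main() {
        let stack = Box::leak(Box::new(Stack([0; STACK_WORDS])));

        // Fake a suspended context: the entry address that swap_ctx's `ret`
        // will jump to, preceded by six zeroed slots for the registers it
        // pops (the word above the entry address is alignment padding).
        stack.0[STACK_WORDS - 2] = fiber_entry as usize as u64;
        unsafe {
            FIBER_SP = stack.0.as_mut_ptr().add(STACK_WORDS - 8);
            swap_ctx(addr_of_mut!(MAIN_SP), FIBER_SP);
        }
        println!("back on the main stack");
    }

Even this naive version touches only seven stack slots per switch; the full save/restore of FPU, vector, and signal state is what makes classic fiber switches look expensive by comparison.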
> Go addresses this by allocating chunks of stack on demand, but that still imposes a cost and a dependency on dynamic allocation.
The dynamic allocation problem exists all the same whether you're using stackless coroutines, stackful coroutines, etc. Fundamentally, async/await in Rust is just creating a linked list of call frames, like some mainframes do/did. How many Rust users manually check Boxed dyn coroutine creation for OOM? Handling dynamic stack growth is technically a problem even in C; it's just that without exceptions and thread-scoped signal handlers there's no easy way to handle overflow, so few people bother. (Heck, few even bother on Windows, where it's much easier with SEH.) But these are fixable problems; it just requires coordination up and down the OS stack and across toolchains. The inability to coordinate these solutions does not turn ugly compromises (async/await) into cool features.
> First, you have to ban recursion, or else add some kind of language mechanism for tracking how many times a function can possibly recurse. [snip]
>
> Second, you have to ban dynamic calls (i.e. calls to function pointers)
Both of which are the case for async/await in Rust: you have to explicitly Box any async call that Rust can't statically size. We might frame this as being transparent and consistent, except it's not actually consistent, because we don't treat "ordinary", non-async calls this way; they still use the traditional contiguous stack that kills the program on overflow. Nobody wants that much consistency (too much of a "good" thing?), because treating each and every call as async, with all the explicit management that would entail under the current semantics, would be an indefensible nightmare for the vast majority of use cases.
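For concreteness, this is what the recursion restriction looks like in today's Rust: the compiler rejects a directly recursive async fn because its future can't be statically sized, and the standard workaround is to box the recursive call, i.e. one heap allocation per logical frame -- the linked list of frames mentioned above, with no visible OOM handling. (Sketch below; the names are arbitrary, and the futures crate is assumed only to drive the future.)

    use std::future::Future;
    use std::pin::Pin;

    // Rejected by rustc (E0733): the future for `sum` would have to contain
    // itself, so its size can't be computed at compile time.
    //
    // async fn sum(n: u64) -> u64 {
    //     if n == 0 { 0 } else { n + sum(n - 1).await }
    // }

    // Accepted: box the recursive call. Each level of "recursion" is now a
    // separate heap allocation chained to its caller's frame.
    fn sum(n: u64) -> Pin<Box<dyn Future<Output = u64>>> {
        Box::pin(async move {
            if n == 0 { 0 } else { n + sum(n - 1).await }
        })
    }

    fn main() {
        // Any executor will do; futures::executor::block_on (external crate)
        // is used here just to run the future to completion.
        let total = futures::executor::block_on(sum(1_000));
        assert_eq!(total, 500_500);
    }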
> If you allocate too much stack in advance, you end up being not much cheaper than OS threads;
Maybe. A smart event loop could track how many frames are in flight at any given time and reuse preallocated frames as tasks finish and release theirs.
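Something like this hypothetical executor-side pool, say (all names made up; a real one would size frames adaptively and bound the free list):

    // Recycle fixed-size frame (or fiber stack) buffers instead of hitting
    // the allocator once per task.
    struct FramePool {
        free: Vec<Box<[u8]>>, // preallocated, currently idle frames
        frame_size: usize,
        in_flight: usize,     // frames handed out and not yet returned
    }

    impl FramePool {
        fn new(frame_size: usize, prealloc: usize) -> Self {
            FramePool {
                free: (0..prealloc)
                    .map(|_| vec![0u8; frame_size].into_boxed_slice())
                    .collect(),
                frame_size,
                in_flight: 0,
            }
        }

        // Reuse an idle frame; only allocate fresh when demand exceeds
        // anything seen so far.
        fn acquire(&mut self) -> Box<[u8]> {
            self.in_flight += 1;
            self.free
                .pop()
                .unwrap_or_else(|| vec![0u8; self.frame_size].into_boxed_slice())
        }

        fn release(&mut self, frame: Box<[u8]>) {
            self.in_flight -= 1;
            self.free.push(frame); // keep it warm for the next task
        }
    }

    fn main() {
        let mut pool = FramePool::new(64 * 1024, 32);
        let frame = pool.acquire();
        // ... run a task in/on this frame ...
        pool.release(frame);
        assert_eq!(pool.in_flight, 0);
    }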