They are compiler-generated FSMs, but I think it's worth noting that the C++ design was landed in a way that precluded many people from ever seriously considering using them, especially due to the implicit allocation. The reason you are using C++ in the first place is because you care about details like allocation, so to me this is a gigantic fumble.

Rust gets it right, but has its own warts, especially if you're coming from async in a GC world. But there's no allocation; Futures are composable value types.

> The reason you are using C++ in the first place is because you care about details like allocation, so to me this is a gigantic fumble.

I wouldn't say that applies to everybody. I use C++ because it interfaces with the system libraries on every platform, because it has class-based inheritance (like Java and C#, unlike Rust and Zig), and because it compiles to native code without an external runtime. I don't care too much about allocations.

For me the biggest fumble is that C++ provides the async framework, but no actual async stdlib (file I/O and networking). It took a while for options to become available, and while e.g. Asio works nicely, it is crazily over-engineered in places.

I like what Rust offers over C++ in terms of safety and community culture, but I don't enjoy being a tool builder for ecosystem gaps; I'd rather spend the time directly using the tools that already exist. Plus I have the Java and .NET ecosystems for safety, as I am really on the automatic resource management side.

Zig is really Modula-2 in C's clothing. I don't like the kind of handmade culture that has grown around it, and its way of dealing with use-after-free is something I have been able to get in C and C++ for the last thirty years; it is a matter of actually learning the tooling.

Thus C++ it is, for anything that can't be taken over by a compiled managed language.

I would like to use D more, but it seems to have lost its opportunity window, although NASA is now using it, so who knows.

The C++ model is that in theory there is an allocation; in practice, depending on how a specific library was written, the compiler may be able to elide the allocation.
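
To make that concrete, here is a minimal hand-rolled generator (the names IntGenerator and count_to are purely illustrative, not from any library); calling the coroutine notionally heap-allocates a frame to hold its locals, but when the generator is created, consumed and destroyed inside a single caller like this, the optimizer is often able to elide that allocation entirely:

  #include <coroutine>
  #include <cstdio>
  #include <exception>

  // Minimal generator type, written out by hand to show where the frame lives.
  struct IntGenerator {
      struct promise_type {
          int current = 0;
          IntGenerator get_return_object() {
              return IntGenerator{std::coroutine_handle<promise_type>::from_promise(*this)};
          }
          std::suspend_always initial_suspend() noexcept { return {}; }
          std::suspend_always final_suspend() noexcept { return {}; }
          std::suspend_always yield_value(int v) noexcept { current = v; return {}; }
          void return_void() noexcept {}
          void unhandled_exception() { std::terminate(); }
      };

      explicit IntGenerator(std::coroutine_handle<promise_type> h) : handle(h) {}
      IntGenerator(IntGenerator &&other) noexcept : handle(other.handle) { other.handle = {}; }
      IntGenerator(const IntGenerator &) = delete;
      ~IntGenerator() { if (handle) handle.destroy(); }

      bool next() { handle.resume(); return !handle.done(); }
      int value() const { return handle.promise().current; }

      std::coroutine_handle<promise_type> handle;
  };

  // Calling this coroutine notionally heap-allocates a frame holding `i` and `limit`...
  IntGenerator count_to(int limit) {
      for (int i = 0; i < limit; ++i)
          co_yield i;
  }

  int main() {
      // ...but since the generator never escapes this scope, that allocation
      // is a prime candidate for elision (HALO) under optimization.
      auto gen = count_to(3);
      while (gen.next())
          std::printf("%d\n", gen.value());
  }

Whether the elision actually happens depends on the compiler and on how much of the coroutine machinery is visible to it, which is exactly the "depends on how a specific library was written" part.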

It is the same principle that drives languages like Rust with regard to being safe by default: in theory, things like bounds checks cause a performance hit; in practice, compilers are written to elide as many of them as possible.

The required allocation makes them awkward to use for short-lived automatic objects like generators. But for async operations, where you are eventually going to need a long-lived context object anyway, it is a non-issue, especially given the ability to customize allocators.

I say this as someone who is not a fan of stackless coroutines in general, and of the C++ solution in particular.

You can write stuff like this:

  void *operator new(std::size_t sz, Foo &foo, Bar &bar) {
      return foo.m_Buffer; /* should be std::max_align_t-aligned */
  }
and force all coroutines returning your Coroutine type to take (Foo &, Bar &) as arguments this way (it works with as many overloads as you like).
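
For anyone who hasn't used this hook: the operator new above has to live in the scope of the promise_type of your Coroutine return type, because that is where the lookup happens, and overload resolution is first tried with the coroutine's own arguments following the size. A minimal sketch of the wiring (Foo, Bar and m_Buffer are just the placeholder names from the snippet above, and a real version should verify the requested size fits the buffer) could look like this:

  #include <cassert>
  #include <coroutine>
  #include <cstddef>
  #include <exception>

  struct Foo { alignas(std::max_align_t) unsigned char m_Buffer[1024]; };
  struct Bar {};

  struct Coroutine {
      struct promise_type {
          // Considered for coroutines declared as `Coroutine f(Foo &, Bar &, ...)`:
          // the requested size comes first, then the coroutine's own arguments.
          void *operator new(std::size_t sz, Foo &foo, Bar &) {
              assert(sz <= sizeof foo.m_Buffer); // frame size is implementation-defined
              return foo.m_Buffer;
          }
          // Matching no-op delete, since the storage belongs to Foo, not the heap.
          void operator delete(void *, std::size_t) noexcept {}

          Coroutine get_return_object() { return {}; }
          std::suspend_never initial_suspend() noexcept { return {}; }
          std::suspend_never final_suspend() noexcept { return {}; }
          void return_void() noexcept {}
          void unhandled_exception() { std::terminate(); }
      };
  };

  // The frame for this coroutine lands in foo.m_Buffer rather than on the heap.
  Coroutine doWork(Foo &foo, Bar &bar) {
      co_return;
  }

  int main() {
      Foo foo;
      Bar bar;
      doWork(foo, bar);
  }

The no-op operator delete matters: without it, frame teardown would try to hand the caller's buffer back to the global heap.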

I think you missed an important point in the parent comment. You can override the allocation for C++ coroutines. You do have control over details like allocation.

C++ coroutines are so lightweight and customizable (for good and ill) that in 2018 Gor Nishanov gave a presentation where he scheduled binary searches around cache prefetching using coroutines. And yes, he modified the allocation behavior, though he said it only resulted in a modest improvement in performance.