That still requires the process running the outer generator to stay alive the entire time. If that process dies, you can't retry from step three. You need some way of restarting the outer function partway through.
You can do that manually by defining a bunch of explicit steps, with their inputs, their outputs, and how data flows between them, and you could use generators to build a little DSL around that (something like the sketch below). But I think the idea here is to transform arbitrary functions into durable pipelines, without making the author spell out the steps.
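For concreteness, here's roughly what I mean, as a minimal Python sketch. The step names, `run_durable`, `handlers`, and the on-disk journal are all made up for illustration; the point is just that each yield is a named, journaled step:

```python
import json
import os

# Hypothetical workflow written against the explicit-steps DSL:
# each yield hands the runner a named step and its input.
def charge_and_ship(order_id):
    payment = yield ("charge_card", order_id)   # step 1
    label = yield ("print_label", payment)      # step 2
    tracking = yield ("ship", label)            # step 3
    return tracking

def run_durable(workflow, arg, journal_path, handlers):
    # Load the journal of completed step outputs, if one survived a crash.
    journal = []
    if os.path.exists(journal_path):
        with open(journal_path) as f:
            journal = json.load(f)

    gen = workflow(arg)
    result = None
    step_no = 0
    try:
        while True:
            name, payload = gen.send(result)
            if step_no < len(journal):
                # This step finished before a crash: replay its output.
                result = journal[step_no]
            else:
                # New step: run it, then persist the output before
                # advancing, so a crash right here is retryable.
                result = handlers[name](payload)
                journal.append(result)
                with open(journal_path, "w") as f:
                    json.dump(journal, f)
            step_no += 1
    except StopIteration as done:
        return done.value
```

If the process dies after step two, rerunning `run_durable` with the same `journal_path` replays the first two outputs from disk and only re-executes step three.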
Personally, I'd prefer the more explicit DSL-based approach, but I can see why alternatives like this would appeal to some people.
When we built Distributed Async Await, we went a step further: every time the generator instance awaits, we "kill" the generator instance (you can't really kill a generator, but you can let it go out of scope) and create a new one when the awaited computation completes. So in essence, we built resume semantics on top of restart semantics. We were inspired by the Crash-Only Software paper: https://dslab.epfl.ch/pubs/crashonly.pdf
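For anyone curious what that looks like mechanically, here is a minimal Python sketch of resume-on-top-of-restart, assuming a deterministic workflow body. None of this is Distributed Async Await's actual API; `resume`, `run`, and `execute` are hypothetical:

```python
def resume(workflow, arg, completed):
    """Build a brand-new generator and fast-forward it by feeding it
    the recorded results of every await that has already finished."""
    gen = workflow(arg)
    value = None
    for result in completed:
        gen.send(value)   # generator yields its next awaited call...
        value = result    # ...which we answer from the log instead
    # gen is now exactly where the previous instance "died".
    return gen, gen.send(value)

def run(workflow, arg, execute):
    completed = []  # durable log; in a real system this lives in storage
    while True:
        try:
            gen, pending = resume(workflow, arg, completed)
        except StopIteration as done:
            return done.value  # the workflow ran to completion
        # Execute the pending call, log the result, and let this
        # generator instance go out of scope: restart, not resume.
        completed.append(execute(pending))
        del gen
```

Every loop iteration throws the old generator away and replays the log from scratch, so a process crash at any point just means the next run replays a slightly longer log.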
Yeah, that's what I meant by a tiny framework: you can keep adding features to it until it implements a DSL like that.
These alternatives just hide the implementation and make debugging, extending, and configuring it much harder.
Yeah, I completely agree; the implementation hiding makes me very uncomfortable with this sort of approach. It's the same with a lot of Vercel's work: it's very easy to do simple things on the happy path, but the further you stray from that path, the more complex things become.