Decorators don't work for functions, unfortunately, so wouldn't work in this case. You'd need to add a bunch of class boilerplate to make it work.
JavaScript didn't have a lot of great options for this kind of statically-declared metaprogramming, which is why the "use X"; syntax has become so popular in various spaces. It's definitely not my favourite approach, but I don't think there's any clear "best" solution here.
You can use generator functions to achieve the exact same thing without magic strings or a bundler step. And the bonus is that you can debug it.
At Resonate, we are using generators (in both the TypeScript SDK and the Python SDK) to implement Distributed Async Await, Resonate's Durable Execution Framework. Because generators transfer control back to the caller, you can essentially build your own event loop and weave distributed coordination (RPCs and callbacks) and distributed recovery (restarts) right into the execution with fairly few lines of code.
Disclaimer: I'm the CEO of Resonate
I don't believe you can. I believe what they're trying to do is rewrite the underlying function into separate functions that can be called in any order and combination. That's not possible with generators. With a generator, I can pause between steps, but I can't, say, retry the third step four times until it succeeds, suspend the entire process, and then jump straight to the fourth step. Or I can't run different steps in different processes on different machines depending on load balancing concerns.
I suspect that's why they need to transform the function into individual step functions, and for that they need to do static analysis. And if they're doing static analysis, then all the concerns detailed here and elsewhere apply and you're basically just picking your favourite of a bunch of syntaxes with different tradeoffs. I don't like magic strings, but at least they clearly indicate that magic is happening, which is useful for the developer reading the code.
> With a generator, I can pause between steps, but I can't, say, retry the third step four times until it succeeds, suspend the entire process, and then jump straight to the fourth step.
If I recall correctly, other solutions in this space work by persisting & memoizing the results of the steps as they succeed, so the whole thing can be rerun and anything already completed uses the memoized result.
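The replay idea can be sketched in a few lines (this is a toy illustration, not any specific product's implementation): completed step results are persisted in a journal, and on re-execution a step whose result is already journaled returns the memoized value instead of running again.

```typescript
// Journal of completed step results; in a real system this would be
// persisted (database, log, durable promise), not an in-memory Map.
const journal = new Map<string, unknown>();
let executions = 0; // counts how often step bodies actually run

function step<T>(id: string, fn: () => T): T {
  if (journal.has(id)) return journal.get(id) as T; // replay: skip work
  const result = fn();
  journal.set(id, result);                          // checkpoint
  return result;
}

function pipeline(): number {
  const a = step("fetch", () => { executions++; return 21; });
  const b = step("double", () => { executions++; return a * 2; });
  return b;
}

const first = pipeline();   // executes both steps
const second = pipeline();  // pure replay: no step body re-executes
console.log(first, second, executions); // 42 42 2
```

Rerunning the whole function after a crash then "jumps" past completed steps for free, because they resolve instantly from the journal.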
That's what we do at Resonate: We built Distributed Async Await, basically async await with durability provided by Durable Promises. A Durable Promise is the checkpoint (the memoization device). When the generator restarts, it skips what has already been done.
We don't have workflows and steps though; like async await, it's just functions all the way down.
Disclaimer: I'm the CEO of resonate
You are partly correct: if each step is another generator function, you can retry the steps until they succeed or fail, and even create a small framework around it.
That still requires the process running the outer generator to stay active the entire time. If that fails, you can't retry from step three. You need some way of starting the outer function part way through.
You can do that manually by defining a bunch of explicit steps, their outputs and inputs, and how to transform the data around. And you could use generators to create a little DSL around that. But I think the idea here is to transform arbitrary functions into durable pipelines.
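An explicit-DSL version might look something like this (purely illustrative, with hypothetical names): steps and their data flow are declared up front, so a pipeline can be restarted from any point as long as the recorded outputs of earlier steps are supplied.

```typescript
// Each step has a name and a function from the accumulated state
// (outputs of earlier steps) to its own output.
interface StepDef {
  name: string;
  run: (state: Record<string, unknown>) => unknown;
}

// Runs the pipeline, skipping any step whose output is already in state,
// so a fresh process can pick up from step three.
function runFrom(
  steps: StepDef[],
  state: Record<string, unknown>
): Record<string, unknown> {
  for (const s of steps) {
    if (s.name in state) continue;  // already completed earlier
    state[s.name] = s.run(state);   // execute and record output
  }
  return state;
}

const steps: StepDef[] = [
  { name: "load",      run: () => 10 },
  { name: "transform", run: (st) => (st["load"] as number) + 5 },
  { name: "save",      run: (st) => `saved ${st["transform"]}` },
];

const full = runFrom(steps, {});                           // fresh run
const resumed = runFrom(steps, { load: 10, transform: 15 }); // resume mid-way
console.log(full["save"], resumed["save"]); // "saved 15" "saved 15"
```

The cost is exactly the explicitness: you have to name the steps and thread the data by hand, which is what the transform-arbitrary-functions approach tries to avoid.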
Personally, I'd prefer the more explicit DSL-based approach, but I can see why alternatives like this would appeal to some people.
When we built Distributed Async Await, we went a step further: Every time the generator instance awaits, we "kill" the generator instance (you cannot really kill a generator, but you can just let it go out of scope) and create a new one when the awaited computation completes. So in essence, we built resume semantics on top of restart. We were inspired by the paper Crash-Only Software https://dslab.epfl.ch/pubs/crashonly.pdf
Yeah, that's what I meant by a tiny framework: you can add features to it and have it implement a DSL of sorts.
These alternatives just hide the implementation and make debugging, extending, and configuring effectively unavailable.
Yeah, I completely agree that the implementation hiding makes me very uncomfortable with this sort of approach. It's the same with a lot of Vercel's work - it's very easy to do simple things on the happy path, but the more you stray from that path, the more complex things become.
Since this magic string requires a preprocessor step anyway, there's no reason they couldn't make it a decorator that works on functions. I don't see the problem?
But then it's not valid TypeScript anymore. So all the other tooling breaks: syntax highlighting, LSP, Linter, ...
Looking at the AST [0], this doesn't seem to be the case. Since TypeScript already supports decorators in other contexts, it successfully parses everything and identifies the decorator to boot. Since you're working in a preprocessor context anyway, there are a number of options to make all of this work well together.
[0] https://ts-ast-viewer.com/#code/GYVwdgxgLglg9mABMOcAUBKRBvAU...
Instead of decorators, it could just be a higher-order function, which could handle this easily in any scenario that mixes TS and JS.
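For concreteness, here is what the higher-order-function version looks like (names are made up for illustration). It works in plain TS/JS at runtime with no preprocessor, though, as pointed out further down the thread, it gives up static analysability:

```typescript
// Wraps any function; a real implementation would checkpoint, retry,
// or journal here instead of just logging.
function durable<A extends unknown[], R>(
  fn: (...args: A) => R
): (...args: A) => R {
  return (...args: A): R => {
    console.log(`running ${fn.name}`);
    return fn(...args);
  };
}

const charge = durable(function charge(amount: number): string {
  return `charged ${amount}`;
});

console.log(charge(100)); // "charged 100"
```

No new syntax, no tooling breakage: it's just a function call.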
Well yes, that's the sane way anyone would reach for first, but that's clearly not new-age enough by Vercel's standards, so here we are. Similar to the other magic string interactions, all of this is a roundabout way of introducing platform lock-in.
The problem with higher-order functions is that you can't guarantee that your static analysis is correct, and you end up with functions that look like functions but don't behave like functions.
It's similar to the problem that `require` had in browsers, and the reason that the `import` syntax was chosen instead. In NodeJS, `require` is just a normal function, so I can do anything with it that I could do with a normal function, like reassigning it or wrapping it or overwriting it or whatever. But in browsers (and in bundlers), all imports need to be statically known before executing the program, so we need to search for all calls to `require` statically. This doesn't work in JavaScript - there are just too many ways to dynamically call a function with arguments that aren't known until runtime, or in such a way that the calling code is obscured.
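To illustrate (using a stand-in `fakeRequire` so the snippet is self-contained): every call below is perfectly legal JavaScript, but a tool scanning the source text cannot statically tell which modules get loaded.

```typescript
const loaded: string[] = [];
// Stand-in for require: records which module ids were requested.
const fakeRequire = (id: string): void => { loaded.push(id); };

const name = ["lo", "dash"].join("");
fakeRequire(name);            // argument computed at runtime

const r = fakeRequire;        // the function aliased to another name
r("./config");

const table: Record<string, (id: string) => void> = { load: fakeRequire };
table["load"]("./more");      // smuggled through an object property

console.log(loaded); // ["lodash", "./config", "./more"]
```

A bundler would have to solve the general problem of predicting program behaviour to follow all of these, which is why `import` restricts the syntax instead.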
That's why the `import` syntax was introduced as an alternative to `require` that had rules that made it statically analysable.
You'd end up with a similar situation here. You want to know statically which functions are being decorated, but JavaScript is a deeply dynamic language. So either you end up having a magic macro-like function that can only be called in certain places and gets transpiled out of the code so it doesn't appear at runtime, or you have a real function but you recognise that you won't be able to statically find all uses of that function and will need to set down certain rules about what uses can do with that function.
Either way, you're going to be doing some magic metaprogramming that the developer needs to be aware of. The benefit of the "use X" syntax is that it looks more magical, and therefore better indicates to the reader that magic is happening in this place. Although I agree that it's not my personal preference.
But you can see there that it flags it as an error. The parser is lenient, sure, and tries to parse the rest of the file despite the error, but it's still not valid syntax, so you'd need to update all your other tools (LSP, formatter, linter, etc) to understand the new syntax.
A bunch of class "boilerplate" is two lines wrapping the thing in a class, which is most likely the right thing to do anyway. You'd want to group the durable functions and manage the dependencies in some way.