Of all the syntax options they could've gone with, they settled on arguably the worst. If you want a one-liner, decorators are widely used across different languages, and TypeScript supports them as well.

Decorators don't work for functions, unfortunately, so they wouldn't work in this case. You'd need to add a bunch of class boilerplate to make it work.

JavaScript didn't have a lot of great options for this kind of statically-declared metaprogramming, which is why the "use X"; syntax has become so popular in various spaces. It's definitely not my favourite approach, but I don't think there's any clear "best" solution here.

You can use generator functions to achieve the exact same thing without magic strings or bundler plugins. And as a bonus, you can debug it.

At Resonate, we are using generators (both for the TypeScript SDK and the Python SDK) to implement Distributed Async Await, Resonate's Durable Execution Framework. Because generators transfer control back to the caller, you can essentially build your own event loop and weave distributed coordination (RPCs and callbacks) and distributed recovery (restarts) right into the execution in fairly few lines of code.
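A minimal sketch of the pattern (illustrative names, not our actual API): every yield hands a step back to the caller, and the caller journals each result so a restarted run replays completed steps instead of re-executing them.

    // Workflow as a generator: each yielded thunk is one step.
    // getUser and sendEmail are placeholder functions.
    function* welcome(userId: string) {
      const user = yield () => getUser(userId);   // step 1
      const sent = yield () => sendEmail(user);   // step 2
      return sent;
    }

    // Caller-side event loop: execute steps, checkpoint results.
    async function drive<T>(
      gen: Generator<() => Promise<unknown>, T, any>,
      journal: unknown[],                     // results persisted so far
      persist: (r: unknown) => Promise<void>,
    ): Promise<T> {
      let i = 0;
      let next = gen.next();
      while (!next.done) {
        let result: unknown;
        if (i < journal.length) {
          result = journal[i];                // replay: step already ran
        } else {
          result = await next.value();        // first run: execute the step
          await persist(result);              // checkpoint before advancing
        }
        next = gen.next(result);
        i++;
      }
      return next.value;
    }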

Disclaimer: I'm the CEO of Resonate

I don't believe you can. I believe what they're trying to do is rewrite the underlying function into separate functions that can be called in any order and combination. That's not possible with generators. With a generator, I can pause between steps, but I can't, say, retry the third step four times until it succeeds, suspend the entire process, and then jump straight to the fourth step. Or I can't run different steps in different processes on different machines depending on load balancing concerns.

I suspect that's why they need to transform the function into individual step functions, and for that they need to do static analysis. And if they're doing static analysis, then all the concerns detailed here and elsewhere apply and you're basically just picking your favourite of a bunch of syntaxes with different tradeoffs. I don't like magic strings, but at least they clearly indicate that magic is happening, which is useful for the developer reading the code.

> With a generator, I can pause between steps, but I can't, say, retry the third step four times until it succeeds, suspend the entire process, and then jump straight to the fourth step.

If I recall correctly, other solutions in this space work by persisting & memoizing the results of the steps as they succeed, so the whole thing can be rerun and anything already completed uses the memoized result.

That's what we do at Resonate: We build Distributed Async Await, basically async await with durability provided by Durable Promises. A Durable Promise is the checkpoint (the memoization device). When the generator restarts, it skips what has already been done.

We don't have workflows and steps, though; like async await, it's just functions all the way down.

Disclaimer: I'm the CEO of Resonate

You are partly correct: if each step is another generator function, you can retry the steps until they succeed or fail, and even create a small framework around it.

That still requires the process running the outer generator to stay active the entire time. If that fails, you can't retry from step three. You need some way of starting the outer function part way through.

You can do that manually by defining a bunch of explicit steps, their outputs and inputs, and how to transform the data around. And you could use generators to create a little DSL around that. But I think the idea here is to transform arbitrary functions into durable pipelines.

Personally, I'd prefer the more explicit DSL-based approach, but I can see why alternatives like this would appeal to some people.

When we built Distributed Async Await, we went a step further: Every time the generator instance awaits, we "kill" the generator instance (you cannot really kill a generator, but you can just let it go out of scope) and create a new one when the awaited computation completes. So in essence, we built resume semantics on top of restart. We were inspired by the Crash-Only Software paper: https://dslab.epfl.ch/pubs/crashonly.pdf
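Roughly, and building on the sketch upthread (illustrative, not our real code): resuming is just re-running a fresh generator over the journal until you reach the first step that hasn't completed yet.

    // "Resume" built on restart: no live generator instance is kept around.
    function fastForward(journal: unknown[]) {
      const gen = welcome("user_123");  // brand-new generator instance
      let next = gen.next();
      for (const result of journal) {
        next = gen.next(result);        // feed recorded results back in
      }
      return { gen, next };             // positioned at the first un-run step
    }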

Yeah, that's what I meant by a tiny framework: you can add features to it and have it implement a DSL of sorts.

These alternatives just hide the implementation and make debugging, extending, and configuring unavailable.

Yeah, I completely agree that the implementation hiding makes me very uncomfortable with this sort of approach. It's the same with a lot of Vercel's work - it's very easy to do simple things on the happy path, but the more you stray from that path, the more complex things become.

Since this magic string requires a preprocessor step anyway, there's no reason they couldn't make it a decorator that works on functions. I don't see the problem?

But then it's not valid TypeScript anymore. So all the other tooling breaks: syntax highlighting, LSP, Linter, ...

Looking at the AST [0], this doesn't seem to be the case. Since TypeScript already supports decorators in other contexts, it successfully parses everything and identifies the decorator to boot. Since you're working in a preprocessor context anyway, there are a number of options to make all of this work well together.

[0] https://ts-ast-viewer.com/#code/GYVwdgxgLglg9mABMOcAUBKRBvAU...

Instead of decorators, it could be just a higher-order function, which could handle this easily in any scenario that mixes TS and JS.
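Something like this (where `durable` is a hypothetical wrapper, not any particular library's API):

    // Plain higher-order function: no compiler step, works in TS or JS.
    // getUser and sendEmail are placeholders.
    const welcome = durable("welcome", async (userId: string) => {
      const user = await getUser(userId);
      return await sendEmail(user.email);
    });

    await welcome("user_123");  // the call goes through the durability layer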

Well yes, that's the sane way anyone would reach for first, but that's clearly not new-age enough by Vercel's standards, so here we are. Similar to the other magic string interactions, all of this is a roundabout way of introducing platform lock-in.

The problem with higher-order functions is that you can't guarantee that your static analysis is correct, and you end up with functions that look like functions but don't behave like functions.

It's similar to the problem that `require` had in browsers, and the reason that the `import` syntax was chosen instead. In NodeJS, `require` is just a normal function, so I can do anything with it that I could do with a normal function, like reassigning it or wrapping it or overwriting it or whatever. But in browsers (and in bundlers), all imports need to be statically known before executing the program, so we need to search for all calls to `require` statically. This doesn't work in JavaScript - there are just too many ways to dynamically call a function with arguments that are unknown until runtime or in such a way that the calling code is obscured.
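For example, all of these are legal, but none of them can be resolved statically (names are illustrative):

    const r = require;                        // aliasing the function
    r("./plugins/" + pluginName);             // computed module path
    (useMock ? mockRequire : require)("fs");  // call target picked at runtime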

That's why the `import` syntax was introduced as an alternative to `require` that had rules that made it statically analysable.

You'd end up with a similar situation here. You want to know statically which functions are being decorated, but JavaScript is a deeply dynamic language. So either you end up having a magic macro-like function that can only be called in certain places and gets transpiled out of the code so it doesn't appear at runtime, or you have a real function but you recognise that you won't be able to statically find all uses of that function and will need to set down certain rules about what uses can do with that function.

Either way, you're going to be doing some magic metaprogramming that the developer needs to be aware of. The benefit of the "use X" syntax is that it looks more magical, and therefore better indicates to the reader that magic is happening in this place. Although I agree that it's not my personal preference.

But you can see there that it flags it as an error. The parser is lenient, sure, and tries to parse the rest of the file despite the error, but it's still not valid syntax, so you'd need to update all your other tools (LSP, formatter, linter, etc) to understand the new syntax.

A bunch of class "boilerplate" is two lines to wrap the thing in a class, which is most likely the right thing to do anyway. You would want to group the durable functions and manage the dependencies in some way.

Yeah, but it's a Vercel product, and they also pushed the 'use server' and 'use client' directives and probably want to build on them.

Absolutely bizarre decisions.

Better than when they try to contribute to React with proper functions like useActionState and ship so many crippling bugs that the devs just leave the issues on read.

> I'm trying to find how they implemented the "use workflow" thing, and before I could find it I already found https://github.com/vercel/workflow/blob/main/packages/core/s...

> Telemetry is part of the core.

> Yuck.

It's on the landing page. There isn't even a standalone mode yet, so you can't use it with Node.js; you have to use it with Next.js. All to say, it's just an early alpha preview. I would wait for the project to mature before considering using it anywhere.

My bad, this is OpenTelemetry stuff to check the status of your servers, not for the vendor to slurp as much data from you as possible.

These are OpenTelemetry functions you're referencing. Being able to trace, profile, and debug your own code that's executing in a highly distributed environment is a pretty useful thing. This isn't (necessarily) user-behavior telemetry.

At least understand what you're looking at before getting the ick.

That's fair. I'm too accustomed to seeing the other type of telemetry shoved in all over the place.

Good luck running a server without any sort of telemetry. How will you debug stuff without logs and traces? Seems to me that the priorities are with practical, real everyday engineering concerns.

It's apparently an SWC compiler plugin.

or no-op functions like useWorkflow() (with some kind of stub that prevents dead code elimination).

This seems pretty similar to Cloudflare Workflows (https://developers.cloudflare.com/workflows/), but with code-rewriting to use syntax magic (functions annotated with "use workflow" and "use step") instead of Cloudflare's `step.do` and `step.sleep` function calls. (I think I lightly prefer Cloudflare's model for not relying on a code-rewriting step, partly because I think it's easier for programmers to see what's going on in the system.) Workflow's Hooks are similar to Cloudflare's `step.waitForEvent` + `instance.sendEvent`. It's kind of exciting to see this programming model get more popular. I wonder if the ecosystem can evolve into a standardized model.
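For comparison, Cloudflare's explicit-call model looks roughly like this (sketched from memory of their docs; getUser and sendEmail are placeholders):

    import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from "cloudflare:workers";

    export class WelcomeWorkflow extends WorkflowEntrypoint {
      async run(event: WorkflowEvent<{ userId: string }>, step: WorkflowStep) {
        // each step is individually checkpointed and retried
        const user = await step.do("get user", () => getUser(event.payload.userId));
        await step.sleep("wait a day", "1 day");
        await step.do("send email", () => sendEmail(user.email));
      }
    }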

Actually, both Vercel and Cloudflare are based on the API that we built at https://inngest.com (disclaimer: I'm a founder).

I strongly believe that being obvious about steps with `step.run` is important: it improves o11y, makes things explicit, and you can see transactional boundaries.

So Vercel is adamant about making Next.js apps' behavior completely unpredictable and hidden behind tons of magic code?

At least in any other framework or library, I can just command-click and see why things are not working, place breakpoints, and even modify code.

It's a great business model with epic lock-in. Bored front-end devs keep indulging/enabling it, so why stop?

So this seems similar to https://temporal.io/, am I reading this right? I used that briefly a few years ago and it was pretty nice at the time. It did make some features much easier to model like their welcome email example. Would love to hear from someone with extensive temporal experience, iirc the only drawback was on the infra side of things.

It's also similar to https://www.restate.dev/.

And to DBOS too

So at its core this is "just" a toolkit to add automatic retries to functions inside another function? I don't know if the audience Vercel is targeting knows about idempotency as well as they should before plastering all their functions with "use workflow".

I guess in the end it's another abstraction layer for queues or state machines and another way to lock you into Vercel.

I'm excited about this because durable workflows are really important for making AI applications production ready :) Disclaimer: I'm working on DBOS, a durable workflow library built on Postgres, which looks complementary to this.

I asked their main developer Dillon about the data/durability layer and also the compilation step. I wonder if adding a "DBOS World" will be feasible. That way, you get Postgres-backed durable workflows, queues, messaging, streams, etc all in one package, while the "use workflow" interface remains the same.

Here is the response from Dillon, and I hope it's useful for the discussion here:

> "The primary datastore is dynamodb and is designed to scale to support tens of thousands of v0 size tenants running hundreds of thousands of concurrent workflows and steps."

> "That being said, you don't need to use Vercel as a backend to use the workflow SDK - we have created a interface for anyone to implements called 'World' that you can use any tech stack for https://github.com/vercel/workflow/blob/main/packages/world/..."

> "you will require a compiler step as that's what picks up 'use workflow' and 'use step` and applies source transformations. The node.js run time limitations only apply to the outer wrapper function w/ `use workflow`"

This is actually pretty cool. We have a similar custom library at Xbox that's used extensively across all of our services.

I do wish that there was some kind of self-hostable World implementation at launch. If other PaaS providers jump onto this, I could see this sticking around.

Hi, I'm Gal from the team. Thanks! We did ship a reference Postgres implementation. It will receive more love now that we've open sourced it, but we can't call it "production ready" without running it in production.

But we did have convos in the last couple of days on what we can do next on the pg world ;D

Azure's Durable Task Framework or something else? I guess there's nothing public on it, which is a shame, because it sounds interesting.

It's not really clear how you "update" a workflow/step method.

What happens if you saw a bug, or you want to update and change a workflow? Is there a way to discard / upgrade the existing in-memory workflows that are being executed (and correspond to the previous version) so they are now "updated"?

I'd rather be explicit about what's going on at each step. That way idempotent functions can be handled differently, retry limits can be applied, and no separate preprocessor is required.

    export async function welcome(userId: string) {
      const user = await retry(() => getUser(userId));
      const { subject, body } = await retry(() => generateEmail({
        name: user.name, plan: user.plan
      }));
      const { status } = await retry(() => sendEmail({
        to: user.email,
        subject,
        body,
      }), 2);
      return { status, subject, body };
    }
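Where `retry` is a small helper along these lines (the optional second argument caps the attempts):

    async function retry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
      let lastError: unknown;
      for (let i = 0; i < attempts; i++) {
        try {
          return await fn();
        } catch (err) {
          lastError = err;   // remember the failure and try again
        }
      }
      throw lastError;       // all attempts exhausted
    }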

Skipping the part that defines what a "durable" function means is typical Vercel.

It's probably a common async functional programming term that I don't know.

But when "algebraic effects" were all the rage, the people evangelizing them at least cared to explain first in their blog posts.

This one instead jumps straight from AI agents (what do those have to do with TypeScript syntax extensions?) to "installation".

No thanks.

Edit:

I've read the examples after commenting and it's understandable, still explained in a post-hoc way that I really dislike, especially when it comes to a proprietary syntax extension.

Also the examples for their async "steps" look like they are only one step away from assuming that any async function is some special Next.js thing with assumptions about clients and servers, not "just" async functions with some annotation to allow the "use workflow" to do its magic.

Am I stupid, or does the page not actually explain what workflow is?

It doesn't explain it on the landing page. Even skimming their docs, it seems like you mostly have to infer the purpose of this based on the features.

I can almost with 100% certainty see this being one of those things that ultimately, after years of just blatantly ignoring something as simple as basic syntax rules, gets redefined into something that is actually valid JavaScript/TypeScript.

> For instance, Math.random and Date constructors are fixed in workflow runs, so you are safe to use them and the framework ensures that the values don't change across replays.

How do you create an environment where everything is deterministic? Do they invoke every supported non-deterministic function when creating the environment and rewrite those functions to return the values from the environment's creation time? Is there something more complex happening?
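If it's record-and-replay, a sketch of that guess (not their actual implementation) would be:

    // First run: record each nondeterministic value. Replay: serve the
    // recorded values back in order. The journal would be persisted
    // alongside the workflow state; Date could be patched the same way.
    const journal: number[] = [];
    let cursor = 0;
    const realRandom = Math.random;

    Math.random = () => {
      if (cursor < journal.length) return journal[cursor++];  // replaying
      const value = realRandom();
      journal.push(value);
      cursor++;
      return value;
    };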

"use client" and "use server" aren't great, but the fact that they had to be declared at the top of a file was at least clear.

Starting to scatter magic strings throughout a code base feels like a real step back.

There's nothing about "use server" that requires it to be at the top of the file, though; it can go in function bodies, and you get a typed RPC method.

I think "use client" is the only one that has to go at the top of a file.

You are correct. "use server" can be slapped in many places.
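For example, inline in a component (a Next.js server action; `db` is a placeholder):

    export default function Page() {
      async function save(formData: FormData) {
        "use server";  // only this function becomes a server action
        await db.users.update(formData);
      }
      return <form action={save}><button>Save</button></form>;
    }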

Shameless plug: I’ve been working on a similar thing for Golang: zenyth.dev

Durability is achieved by running the workflows in a wasm runtime.

Guillermo + Vercel should slow down and concentrate more on fixing the bugs in Next.js instead of adding more features no one asked for.

Somewhat related, since this is about "workflows" and not cloud functions, but are there any practical benefits to cloud functions other than the fact that it's cheaper for the providers, as they don't have to run an entire bespoke runtime for every customer/application?

i'm becoming increasingly convinced that workflows are the wrong model

just build state machines folks

a workflow is just shorthand for a state machine DSL

but with history length and replay determinism landmines

This seems... bad, inelegant.

Not only is it ugly in terms of language design, the feature depends on over-engineered frameworks like Next.js and Nitro. Magic string literals that rewrite your functions? No thanks.

Is this a durable execution engine/solution?

Docs seem really underbaked.

- where does the state and telemetry get stored?

- if something is sleeping for 7 days, and you release a new version in that time, what is invoked?

- how do you configure retries? Looks like it retries forever

And I echo the hatred of the magic strings. Terrible DX.

"use turnMyBrainOff";

"use blackBoxWrapperForEverything";

"Next.js only" Thanks, no thanks. Neat idea though. (I'm mostly using Bun+React)

This is a horrid pattern to try to get people to rely on. Stop with the magic

can anyone point to the "Durable" part?

looking at the docs and examples, I see Workflows and Steps and Retries, but I don't see any Durable yet. none of the examples really make it clear how or where anything gets stored

That depends on the “world”. We built an adapter interface so you can store the data (and other things) anywhere you want. There are some docs, still WIP, regarding that: https://useworkflow.dev/docs/deploying/world

thanks, helpful read!

Lost me at the "use workflow" directive. This and Next 16 expanding the set of directives just make me question whether I'm the madman for thinking they are absolutely terrible.

well that's some scary TypeScript syntax. i didn't know a string constant at the top of a function could change the operation.

or is this some extra compilation step to rewrite the code?

"use strict" has been around since 2009. That being said, this is not a TypeScript or React feature but yet another black box magic NextJS feature to try to lock you into the Vercel ecosystem.

Must be some compile step. Reminds me of “use strict”.

i hate the new pattern of using these magic strings everywhere. “use workflow”, “use client”, etc etc.

I don’t like having custom bundler logic for my code.

Custom bundler + telemetry already included. Smells way too much like Microsoft, too much like lock-in with a deal that gets worse and worse.

"don't use next"

"don't use react"

Please, lumping React in with Next is completely foolish. React is open, battle-hardened, type-safe, and well-documented, while Next is... a vendor lock-in trojan horse targeting low-knowledge developers with concepts that seem beginner-friendly.

I can understand making legit criticisms of React, no doubt the hooks transition had some issues and the library has a high level of complexity without a clear winner in the state management domain, but pretending React is peddling shit like "use workflow" is frankly low effort.

Just shows you how absolutely little people know about the web ecosystem - most people heard something once or twice from someone else and just assume it's true. To make matters worse, you have the typical HN "vanilla html and js only!!!" bandwagon which, if you try to use it for any serious web application, will only lead you down a path of much pain and suffering. I've commented many times in many other threads that I just don't get it; I probably never will.

Mate, don't get mad at us just because you can't code without a framework.

React was created to make low-skilled people capable of shipping low-quality code. If that is the only thing you can do, I'd be careful about calling yourself fullstackchris.

Mate, real programmers use assembly.

Every durable function platform is so disappointing because they all try to work without solving the hard problem: serializing, snapshotting, and restoring a running program. They all have some API that pushes the work of state management and safepoints onto you, the developer.

Once you have the primitive of real durable compute, all the hard bits fall away. And it's not as if this is some fantasy; VM live migration is a real, working example. Then you just write your program in the grug way, use your language's built-in retry tools, and store state in normal variables, because the entire thing is durable, including memory, CPU state, GPU state, network state, open files, etc.
