Nice project.

One thing Cloudflare Workers gets right is strong execution isolation. When self-hosting, what’s the failure model if user code misbehaves? Is there any runtime-level guardrail or tracing for side-effects?

Asking because execution is usually where things go sideways.

Workers that hit limits (CPU, memory, wall-clock) are terminated cleanly with a clear reason. Exceptions are caught with stack traces (at least they should be lol), and logs stream in real time.

What's next: execution recording. Every invocation captures a trace: request, binding calls, timing. Replay locally or hand it to an AI debugger. No more "works on my machine".
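A trace record for an invocation might look something like this; field names are a sketch of what I have in mind, not a final format:

```typescript
// Hypothetical shape of a recorded invocation trace (names illustrative).
interface BindingCall {
  binding: string;    // e.g. "KV", "fetch"
  method: string;     // e.g. "get", "put"
  args: unknown[];    // serialized arguments
  result: unknown;    // serialized return value
  startMs: number;    // offset from invocation start
  durationMs: number;
}

interface ExecutionTrace {
  executionId: string;
  request: { method: string; url: string; headers: Record<string, string>; body?: string };
  bindingCalls: BindingCall[];
  response?: { status: number; body?: string };
  cpuMs: number;
  wallMs: number;
}

// Example record a replay tool could consume.
const trace: ExecutionTrace = {
  executionId: "abc123",
  request: { method: "GET", url: "https://example.com/api", headers: {} },
  bindingCalls: [
    { binding: "KV", method: "get", args: ["user:42"], result: null, startMs: 1, durationMs: 3 },
  ],
  response: { status: 200, body: "ok" },
  cpuMs: 4,
  wallMs: 12,
};
```

Replay then just means re-running the worker with binding calls answered from `bindingCalls` instead of live services.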

I think the CLI will look like:

# Replay a recorded execution:
openworkers replay --execution-id abc123

# Replay with updated code, compare behavior:
openworkers replay --execution-id abc123 --worker ./dist/my-fix.js

Production bug -> replay -> AI fix -> verified -> deployed. That's what I have in mind.

This makes a lot of sense. Recording execution + replay is exactly what’s missing once you move past simple logging.

One thing I’ve found tricky in similar setups is making sure the trace is captured before side-effects happen, otherwise replay can lie to you. If you get that boundary right, the prod → replay → fix → verify loop becomes much more reliable.
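One way to get that boundary right is to append the call to the trace *before* awaiting the side-effect, then fill in the outcome afterwards, so even a crash mid-call leaves evidence. A rough sketch with a recording proxy (all names hypothetical, not OpenWorkers' actual API):

```typescript
type CallRecord = {
  method: string;
  args: unknown[];
  result?: unknown;
  error?: string;
};

// Wraps a binding so every call is logged before it executes;
// the outcome is recorded once the call settles.
function recordBinding<T extends object>(name: string, target: T, log: CallRecord[]): T {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const value = Reflect.get(obj, prop, receiver);
      if (typeof value !== "function") return value;
      return async (...args: unknown[]) => {
        // Record intent first: if the side-effect crashes the worker,
        // the trace still shows the call was attempted.
        const entry: CallRecord = { method: `${name}.${String(prop)}`, args };
        log.push(entry);
        try {
          entry.result = await value.apply(obj, args);
          return entry.result;
        } catch (e) {
          entry.error = String(e);
          throw e;
        }
      };
    },
  });
}

// Demo with a fake KV binding.
const log: CallRecord[] = [];
const kv = recordBinding("KV", { get: async (key: string) => `value-for-${key}` }, log);
```

The same log is what replay would feed back in: instead of forwarding `KV.get` to a live store, answer it from the recorded `result`.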

Really like the direction.