Tangential, but thought I'd share since validation and API calls go hand-in-hand: I'm personally a fan of using `ts-rest` for the entire stack, since it's the leanest of the compile-time + runtime zod/JSON-schema validation libraries out there. It lets you plug in whatever HTTP client you want (personally, I use bun, or fastify in a node env). The added overhead is totally worth it (for me, anyway) for shifting basically all type-safety correctness to compile time.
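For anyone who hasn't seen it, a ts-rest contract looks roughly like this (minimal sketch, endpoint shapes invented):

```typescript
import { initContract } from '@ts-rest/core';
import { z } from 'zod';

const c = initContract();

// One contract shared by server and client; types flow from the zod schemas.
export const contract = c.router({
  getPost: {
    method: 'GET',
    path: '/posts/:id',
    responses: {
      200: z.object({ id: z.string(), title: z.string() }),
      404: z.object({ message: z.string() }),
    },
  },
  createPost: {
    method: 'POST',
    path: '/posts',
    body: z.object({ title: z.string() }),
    responses: {
      201: z.object({ id: z.string(), title: z.string() }),
    },
  },
});
```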

Curious what other folks think and if there are any other options? I feel like I've searched pretty exhaustively, and it's the only one I found that was both lightweight and had robust enough type safety.

Just last week I was about to integrate `ts-rest` into a project for the same reasons you mentioned above... before I realized they don't have express v5 support yet: https://github.com/ts-rest/ts-rest/issues/715

I think `ts-rest` is a great library, but the lack of maintenance didn't make me feel confident enough to invest, even if I wasn't using express. Have you ever considered building your own in-house solution? I wouldn't necessarily recommend this if you already have `ts-rest` set up and are happy with it, but rebuilding custom versions of 3rd party dependencies actually feels more feasible nowadays thanks to LLMs. I ended up building a stripped-down version of `ts-rest` and am quite happy with it. Having full control/understanding of the internals feels very good, and it surprisingly only took a few days. Claude helped immensely and filled a looot of knowledge gaps, namely with complicated TypeScript types. I would also watch out for tree-shaking and accidental client-side zod imports if you decide to go down this route.
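To give a sense of the flavour, the core of what I ended up with looks something like this (hypothetical names, not my actual code):

```typescript
import { z } from 'zod';

// Define each endpoint once; infer request/response types everywhere else.
const getUser = {
  method: 'GET' as const,
  path: (id: string) => `/users/${id}`,
  response: z.object({ id: z.string(), name: z.string() }),
};

async function callGetUser(id: string): Promise<z.infer<typeof getUser.response>> {
  const res = await fetch(getUser.path(id), { method: getUser.method });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  // Runtime parse keeps the compile-time types honest at the network boundary.
  return getUser.response.parse(await res.json());
}
```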

I'm still a bit in shock that I was even able to do this, but yeah building something in-house is definitely a viable option in 2025.

ts-rest doesn't see a lot of support these days. Its lack of adoption of modern TanStack Query integration patterns finally drove us to look for alternatives.

Luckily, oRPC had progressed enough to be viable now. I cannot recommend it enough over ts-rest. It's essentially tRPC, but with support for ts-rest style contracts that enable standard OpenAPI REST endpoints.

- https://orpc.unnoq.com/

- https://github.com/unnoq/orpc
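From a quick skim of the docs, a procedure looks roughly like this (sketch from memory; treat the exact builder method names as assumptions and check the docs):

```typescript
import { os } from '@orpc/server';
import { z } from 'zod';

// A procedure with zod input/output that is also exposed as a plain
// OpenAPI REST route (method names approximate; see the oRPC docs).
export const getPlanet = os
  .route({ method: 'GET', path: '/planets/{id}' })
  .input(z.object({ id: z.coerce.number() }))
  .output(z.object({ id: z.number(), name: z.string() }))
  .handler(async ({ input }) => {
    return { id: input.id, name: 'Earth' };
  });
```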

First time hearing about oRPC, never heard of or used ts-rest and I'm a big fan of tRPC. Is the switch worth the time and energy?

If you're happy with tRPC and don't need proper REST functionality it might not be worth it.

However, if you want to lean in that direction, they recently added some tRPC integrations that let you run oRPC alongside an existing tRPC setup, either as a helpful addition or to support a longer-term migration.

- https://orpc.unnoq.com/docs/openapi/integrations/trpc

Do you need an LLM for this? I've made my own in-house fork of a Java library without any LLM help. I needed apache.poi's excel handler to stream, which poi only supports in one direction. Someone had written a poi-compatible library that streamed in the other direction, but it had dependencies incompatible with mine. So I made my own fork with dependencies that worked for me. That got me out of mvn dependency hell.

Of course I'd rather not maintain my own fork of something that always should have been part of poi, but this was better than maintaining an impossible mix of dependencies.

For forking and changing a few things here and there, I could see how there might be less of a need for LLMs, especially if you know what you're doing. But in my case I didn't actually fork `ts-rest`; I built a much smaller custom abstraction from the ground up, and I don't consider myself to be a top-tier dev. In this case it felt like LLMs provided a lot more value, not necessarily because the problem was overly difficult but more because of the time saved. Had LLMs not existed, I probably would never have considered doing this, as the opportunity cost would have felt too high (i.e. DX work vs critical user-facing work). I estimate it would have taken me ~2 weeks or more to finish the task without LLMs, whereas with LLMs it only took a few days.

I do feel we're heading in a direction where building in-house will become more common than defaulting to 3rd party dependencies—strictly because the opportunity costs have decreased so much. I also wonder how code sharing and open source libraries will change in the future. I can see a world where instead of uploading packages for others to plug into their projects, maintainers will instead upload detailed guides on how to build and customize the library yourself. This approach feels very LLM friendly to me. I think a great example of this is with `lucia-auth`[0] where the maintainer deprecated their library in favour of creating a guide. Their decision didn't have anything to do with LLMs, but I would personally much rather use a guide like this alongside AI (and I have!) rather than relying on a 3rd party dependency whose future is uncertain.

[0] https://lucia-auth.com/

nvm I'm dumb lol, `ts-rest` does support express v5: https://github.com/ts-rest/ts-rest/pull/786. Don't listen to my misinformation above!!

I would say this oversight was a blessing in disguise though, I really do appreciate minimizing dependencies. If I could go back in time knowing what I know now, I still would've gone down the same path.

I've been impressed with Hono's Zod Validator [1] and the type-safe "RPC" clients [2] you can get from it. Most of my usage of Hono has been in Deno projects, but it seems like it has good support on Node and Bun, too.

[1] https://hono.dev/docs/guides/validation#zod-validator-middle...

[2] https://hono.dev/docs/guides/rpc#client
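For reference, a minimal sketch of both halves (route shape and base URL invented):

```typescript
import { Hono } from 'hono';
import { hc } from 'hono/client';
import { zValidator } from '@hono/zod-validator';
import { z } from 'zod';

// Server: the validator middleware types c.req.valid('json') in the handler.
const app = new Hono().post(
  '/posts',
  zValidator('json', z.object({ title: z.string() })),
  (c) => {
    const { title } = c.req.valid('json');
    return c.json({ ok: true, title }, 201);
  },
);

export type AppType = typeof app;

// Client: fully typed from AppType, no codegen step.
const client = hc<AppType>('http://localhost:8787');
const res = await client.posts.$post({ json: { title: 'hello' } });
```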

Agreed. Hono has been great for my usage, and very portable.

Type safety for API calls is huge. I haven't used ts-rest but the compile-time validation approach sounds solid. Way better than runtime surprises. How's the experience in practice? Do you find the schema definition overhead worth it or does it feel heavy for simpler endpoints?

I always try to throw schema validation of some kind in API calls for any codebase I really need to be reliable.

For prototypes I'll sometimes reach for tRPC. I don't like the level of magic it adds for a production app, but it is really quick to prototype with and we all just use RPC calls anyway.
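For context, the whole API surface for a quick prototype is roughly this (sketch, names made up):

```typescript
import { initTRPC } from '@trpc/server';
import { z } from 'zod';

const t = initTRPC.create();

// One router, typed end to end.
export const appRouter = t.router({
  greet: t.procedure
    .input(z.object({ name: z.string() }))
    .query(({ input }) => `hello ${input.name}`),
});

// The client imports only this type, never the server code.
export type AppRouter = typeof appRouter;
```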

For production I'm most comfortable with zod, but there are quite a few good options. I'll have a fetchApi or similar wrapper call that takes in the schema + fetch() params and validates the response.
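Something like this (minimal sketch; `fetchApi` and the shapes are made up):

```typescript
import { z } from 'zod';

async function fetchApi<S extends z.ZodTypeAny>(
  schema: S,
  input: RequestInfo | URL,
  init?: RequestInit,
): Promise<z.infer<S>> {
  const res = await fetch(input, init);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  // Throws a descriptive ZodError instead of letting bad data flow downstream.
  return schema.parse(await res.json());
}

// Usage:
const User = z.object({ id: z.string(), name: z.string() });
const user = await fetchApi(User, '/api/users/123');
```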

How do you supply the schema on the other side?

I found that keeping the frontend & backend in sync was a challenge, so I wrote a script that reads the schemas from the backend and generates an API file in the frontend.

There are a few ways, but I believe SSOT (single source of truth) is key, as others basically said. Some ways:

1. Shared TypeScript types

2. tRPC/ts-rest style: Automagic client w/ compile+runtime type safety

3. RTK (redux toolkit) query style: codegen'd frontend client

I personally prefer #3 for its explicitness - you can actually review the code it generates for a new/changed endpoint. It does come w/ the downside of more code, and as the codebase gets larger you start to need a cache to avoid regenerating the entire API on every little change.

Overall, I find the explicit approach to be worth it because, in my experience, it saves days/weeks of eng hours later on in large production codebases by not chasing down server/client validation quirks.
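For #3, the setup is basically one small config file plus a CLI run; a sketch assuming the `@rtk-query/codegen-openapi` package (file paths made up):

```typescript
// openapi-config.ts
import type { ConfigFile } from '@rtk-query/codegen-openapi';

const config: ConfigFile = {
  schemaFile: './openapi.json',          // the backend's OpenAPI spec
  apiFile: './src/store/emptyApi.ts',    // a hand-written base createApi()
  apiImport: 'emptySplitApi',
  outputFile: './src/store/generatedApi.ts',
  exportName: 'api',
  hooks: true,                           // also emit useXQuery/useXMutation hooks
};

export default config;
```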

What is a validation quirk that would happen when using server side Zod schemas that somehow doesn’t happen with a codegened client?

I'll almost always lean on separate packages for any shared logic like that (at least if I can use the same language on both ends).

For JS/TS, I'll have a shared models package that just defines the schemas and types for any requests and responses that both the backend and frontend are concerned with. I can also define migrations there if model migrations are needed for persistence or caching layers.
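E.g. one file in that shared package might look like this (sketch, shapes invented):

```typescript
// packages/models/src/user.ts -- imported by both backend and frontend
import { z } from 'zod';

export const CreateUserRequest = z.object({
  email: z.string().email(),
  name: z.string().min(1),
});
export type CreateUserRequest = z.infer<typeof CreateUserRequest>;

export const CreateUserResponse = z.object({
  id: z.string(),
  email: z.string().email(),
  name: z.string(),
});
export type CreateUserResponse = z.infer<typeof CreateUserResponse>;
```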

It takes a bit more effort, but I find it nicer to own the setup myself and know exactly how it works rather than trusting a tool to wire all that up for me, usually in some kind of build step or transpilation.

Write them both in TypeScript and have both the request and response shapes defined as schemas for each API endpoint.

The server validates request bodies and produces responses that match the type signature of the response schema.

The client code has an API where it takes the request body as its input shape. And the client can even validate the server responses to ensure they match the contract.

It’s pretty beautiful in practice as you make one change to the API to say rename a field, and you immediately get all the points of use flagged as type errors.
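Concretely, a sketch of one endpoint done this way (paths/fields invented, no particular framework assumed):

```typescript
import { z } from 'zod';

// Shared contract for one endpoint.
export const renameItem = {
  request: z.object({ itemId: z.string(), newName: z.string() }),
  response: z.object({ itemId: z.string(), name: z.string() }),
};

// Server: validate the body, return data matching the response type.
async function handleRenameItem(rawBody: unknown): Promise<z.infer<typeof renameItem.response>> {
  const body = renameItem.request.parse(rawBody);
  return { itemId: body.itemId, name: body.newName };
}

// Client: input typed from the request schema, output re-validated.
async function callRenameItem(
  input: z.infer<typeof renameItem.request>,
): Promise<z.infer<typeof renameItem.response>> {
  const res = await fetch('/api/rename-item', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(input),
  });
  return renameItem.response.parse(await res.json());
}
```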

This will break old clients. Having a deployment strategy that takes that into account is important.

Effect provides a pretty good engine for compile-time schema validation that can be composed with various fetching and processing pipelines, with sensible error handling for cases when external data fails to comply with the schema or when network request fails.
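A minimal sketch, assuming a recent `effect` release where Schema ships in the core package (endpoint and fields invented):

```typescript
import { Effect, Schema } from 'effect';

const User = Schema.Struct({
  id: Schema.String,
  name: Schema.String,
});

// Network failures and schema mismatches both surface as typed errors
// in the Effect error channel instead of untyped throws.
const getUser = Effect.gen(function* () {
  const res = yield* Effect.tryPromise(() => fetch('/api/user'));
  const json = yield* Effect.tryPromise(() => res.json());
  return yield* Schema.decodeUnknown(User)(json);
});
```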

The schema definition is more efficient than writing input validation from scratch anyway, so it's completely win/win unless you want to throw caution to the wind and not do any validation.

Also want to shout out ts-rest. We have a typescript monorepo where the backend and frontend import the api contract from a shared package, making frontend integration both type-safe and dead simple.

I migrated from ts-rest to Effect/HttpApi. It's an incredible ecosystem, and Effect/Schema has overtaken my domain layer. Definitely a learning curve though.

For what it's worth, happy user of ts-rest here. Best solution I landed upon so far.