The killer upgrade here isn’t ESM. It’s Node baking fetch + AbortController into core. Dropping axios/node-fetch trimmed my Lambda bundle and shaved about 100 ms off cold-start latency. If you’re still npm i axios out of habit, 2025 Node is your cue to drop the training wheels.

16 years after launch, the JS runtime centered around network requests now supports network requests out of the box.

Obviously it supported network requests; the fetch API didn't even exist back then, and XMLHttpRequest, which was the standard at the time, is insane.

Insane but worked well. At least we could get download progress.

You can get download progress with fetch. You can't get upload progress.

Edit: Actually, you can even get upload progress, but the implementation seems fraught due to scant documentation. You may be better off using XMLHttpRequest for that. I'm going to try a simple implementation now. This has piqued my curiosity.

It took me a couple hours, but I got it working for both uploads and downloads with a nice progress bar. My uploadFile method is about 40 lines of formatted code, and my downloadFile method is about 28 lines. It's pretty simple once you figure it out!
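
Roughly, the core of both ends up looking something like this (a simplified sketch, not the exact code in the repo; the upload side needs `duplex: 'half'` and, as noted below, an HTTP/2-capable backend):

  // Upload: count bytes as they pass through a TransformStream.
  async function uploadFile(file, url, onProgress) {
    let sent = 0;
    const progress = new TransformStream({
      transform(chunk, controller) {
        sent += chunk.byteLength;
        onProgress(sent / file.size);
        controller.enqueue(chunk);
      },
    });
    return fetch(url, {
      method: 'PUT',
      body: file.stream().pipeThrough(progress),
      duplex: 'half', // required for streaming request bodies
    });
  }

  // Download: read the body incrementally and compare against Content-Length.
  async function downloadFile(url, onProgress) {
    const res = await fetch(url);
    const total = Number(res.headers.get('Content-Length')) || 0;
    const reader = res.body.getReader();
    const chunks = [];
    let received = 0;
    for (;;) {
      const { done, value } = await reader.read();
      if (done) break;
      chunks.push(value);
      received += value.byteLength;
      if (total) onProgress(received / total);
    }
    return new Blob(chunks);
  }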

Note that a key detail is that your server (and any intermediate servers, such as a reverse proxy) must support HTTP/2 or QUIC. I spent much more time on that than on the frontend code. In 2025, this isn't a problem for any modern client and hasn't been for a few years. However, that may not be true for your backend depending on how mature your codebase is. For example, Express doesn't support HTTP/2 without another dependency. After fussing with it for a bit I threw it out and just used Fastify instead (built-in HTTP/2 and high-level streaming). So I understand any apprehension/reservations there.

Overall, I'm pretty satisfied knowing that fetch has wide support for easy progress tracking.

can we see a gist?

https://github.com/hu0p/fetch-transfer-progress-demo/

Which browsers have you tested this in? I ran the feature detection script from the Chrome docs and neither Safari nor Firefox seem to support fetch upload streaming: https://developer.chrome.com/docs/capabilities/web-apis/fetc...

  const supportsRequestStreams = (() => {
    let duplexAccessed = false;
  
    const hasContentType = new Request('http://localhost', {
      body: new ReadableStream(),
      method: 'POST',
      get duplex() {
        duplexAccessed = true;
        return 'half';
      },
    }).headers.has('Content-Type');
  
    return duplexAccessed && !hasContentType;
  })();

Safari doesn't appear to support the duplex option (the duplex getter is never triggered), and Firefox can't even handle a stream being used as the body of a Request object: it ends up converting the body to a string and setting the content type header to 'text/plain'.

Oops. Chrome only! I stand very much corrected. Perhaps I should do less late night development.

It seems my original statement that download, but not upload, is well supported was unfortunately correct after all. I had thought that readable/transform streams were all that was needed, but as you noted, I'd overlooked the lack of duplex option support in Safari/Firefox[0][1]. This is definitely not wide support! I had way too much coffee.

Thank you for bringing this to my attention! After further investigation, I ran into the same problems you did. Firefox failed for me exactly as you noted. Interestingly, Safari fails silently if you use a transform stream with file.stream().pipeThrough([your transform stream here]), but it fails with a message noting the lack of support if you specifically use a writable stream with file.stream().pipeTo([your writable stream here]).

I came across the article you referenced but of course didn't completely read it. It's disappointing that it's from 2020 and no progress has been made on this since. Poking around caniuse, it looks like Safari and Firefox have patchy support for similar behavior in web workers, either via partial support or behind flags. So I suppose there's hope, but I'm sorry if I got anyone's hopes up too far :(

[0] https://caniuse.com/mdn-api_fetch_init_duplex_parameter

[1] https://caniuse.com/mdn-api_request_duplex

The thing I'm unsure about is whether the streams approach is the same as the XHR one. I've no idea how the XHR one was accomplished or whether it was even standards-based in terms of implementation - so my question is:

Does XHR track whether the packet made it to the destination, or only that it was queued to be sent by the OS?

Sniped

Mainly by the fact that, in addition to GP, all the LLMs were saying it's not possible, and I just couldn't believe that was true...

Nerd

What a strange comment. You could always do network calls. Fetch is an API that has similar semantics across browser and server, using Promises.

tell me you're not a Node.js developer :)

Node always had a lower level http module.
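
For reference, the pre-fetch built-in way looked roughly like this:

  const http = require('node:http');

  http.get('http://example.com/', (res) => {
    let body = '';
    res.setEncoding('utf8');
    res.on('data', (chunk) => (body += chunk));
    res.on('end', () => console.log(body));
  });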

Tangential, but thought I'd share since validation and API calls go hand in hand: I'm personally a fan of using `ts-rest` for the entire stack since it's the leanest of the compile-time + runtime zod/JSON-schema-based validation libraries out there. It lets you plug in whatever HTTP client you want (personally, I use Bun, or Fastify in a Node env). The added overhead is totally worth it (for me, anyway) for shifting basically all type-safety correctness to compile time.

Curious what other folks think and if there are any other options? I feel like I've searched pretty exhaustively, and it's the only one I found that was both lightweight and had robust enough type safety.
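
For anyone who hasn't seen it, a contract looks roughly like this (a from-memory sketch, so double-check against the ts-rest docs):

  import { initContract } from '@ts-rest/core';
  import { z } from 'zod';

  const c = initContract();

  export const contract = c.router({
    getPost: {
      method: 'GET',
      path: '/posts/:id',
      responses: {
        200: z.object({ id: z.number(), title: z.string() }),
        404: z.object({ message: z.string() }),
      },
    },
  });

The same contract object is imported by both the server implementation and the client, which is where the compile-time safety comes from.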

Just last week I was about to integrate `ts-rest` into a project for the same reasons you mentioned above... before I realized they don't have express v5 support yet: https://github.com/ts-rest/ts-rest/issues/715

I think `ts-rest` is a great library, but the lack of maintenance didn't make me feel confident to invest, even if I wasn't using express. Have you ever considered building your own in-house solution? I wouldn't necessarily recommend this if you already have `ts-rest` setup and are happy with it, but rebuilding custom versions of 3rd party dependencies actually feels more feasible nowadays thanks to LLMs. I ended up building a stripped down version of `ts-rest` and am quite happy with it. Having full control/understanding of the internals feels very good and it surprisingly only took a few days. Claude helped immensely and filled a looot of knowledge gaps, namely with complicated Typescript types. I would also watch out for treeshaking and accidental client zod imports if you decide to go down this route.

I'm still a bit in shock that I was even able to do this, but yeah building something in-house is definitely a viable option in 2025.

ts-rest doesn't see a lot of support these days. Its lack of adoption of modern TanStack Query integration patterns finally drove us to look for alternatives.

Luckily, oRPC had progressed enough to be viable now. I cannot recommend it over ts-rest enough. It's essentially tRPC but with support for ts-rest style contracts that enable standard OpenAPI REST endpoints.

- https://orpc.unnoq.com/

- https://github.com/unnoq/orpc

First time hearing about oRPC, never heard of or used ts-rest and I'm a big fan of tRPC. Is the switch worth the time and energy?

If you're happy with tRPC and don't need proper REST functionality it might not be worth it.

However, if you want to lean in that direction, they recently added some tRPC integrations that let you run oRPC alongside an existing tRPC setup, either permanently or to support a longer-term migration.

- https://orpc.unnoq.com/docs/openapi/integrations/trpc

Do you need an LLM for this? I've made my own in-house fork of a Java library without any LLM help. I needed Apache POI's Excel handler to stream, which POI only supports in one direction. Someone had written a POI-compatible library that streamed in the other direction, but it had dependencies incompatible with mine. So I made my own fork with dependencies that worked for me. That got me out of mvn dependency hell.

Of course I'd rather not maintain my own fork of something that always should have been part of POI, but this was better than maintaining an impossible mix of dependencies.

For forking and changing a few things here and there, I could see how there might be less of a need for LLMs, especially if you know what you're doing. But in my case I didn't actually fork `ts-rest`; I built a much smaller custom abstraction from the ground up, and I don't consider myself to be a top-tier dev. In this case it felt like LLMs provided a lot more value, not necessarily because the problem was overly difficult but more so because of the time saved. Had LLMs not existed, I probably would never have considered doing this, as the opportunity cost would have felt too high (i.e. DX work vs critical user-facing work). I estimate it would have taken me ~2 weeks or more to finish the task without LLMs, whereas with LLMs it only took a few days.

I do feel we're heading in a direction where building in-house will become more common than defaulting to 3rd party dependencies—strictly because the opportunity costs have decreased so much. I also wonder how code sharing and open source libraries will change in the future. I can see a world where instead of uploading packages for others to plug into their projects, maintainers will instead upload detailed guides on how to build and customize the library yourself. This approach feels very LLM friendly to me. I think a great example of this is with `lucia-auth`[0] where the maintainer deprecated their library in favour of creating a guide. Their decision didn't have anything to do with LLMs, but I would personally much rather use a guide like this alongside AI (and I have!) rather than relying on a 3rd party dependency whose future is uncertain.

[0] https://lucia-auth.com/

nvm I'm dumb lol, `ts-rest` does support express v5: https://github.com/ts-rest/ts-rest/pull/786. Don't listen to my misinformation above!!

I would say this oversight was a blessing in disguise though, I really do appreciate minimizing dependencies. If I could go back in time knowing what I know now, I still would've gone down the same path.

I've been impressed with Hono's zod Validator [1] and the type-safe "RPC" clients [2] you can get from it. Most of my usage of Hono has been in Deno projects, but it seems like it has good support on Node and Bun, too.

[1] https://hono.dev/docs/guides/validation#zod-validator-middle...

[2] https://hono.dev/docs/guides/rpc#client
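
A rough sketch of the combination (from memory; check the linked docs for exact details):

  import { Hono } from 'hono';
  import { hc } from 'hono/client';
  import { zValidator } from '@hono/zod-validator';
  import { z } from 'zod';

  const app = new Hono().post(
    '/posts',
    zValidator('json', z.object({ title: z.string() })),
    (c) => c.json({ ok: true, title: c.req.valid('json').title })
  );

  export type AppType = typeof app;

  // Elsewhere, a type-safe client inferred from the app's routes:
  const client = hc<AppType>('http://localhost:8787');
  const res = await client.posts.$post({ json: { title: 'hello' } });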

Agreed. Hono has been great for my usage, and very portable.

Type safety for API calls is huge. I haven't used ts-rest but the compile-time validation approach sounds solid. Way better than runtime surprises. How's the experience in practice? Do you find the schema definition overhead worth it or does it feel heavy for simpler endpoints?

I always try to throw schema validation of some kind in API calls for any codebase I really need to be reliable.

For prototypes I'll sometimes reach for tRPC. I don't like the level of magic it adds for a production app, but it is really quick to prototype with and we all just use RPC calls anyway.

For production I'm most comfortable with zod, but there are quite a few good options. I'll have a fetchApi or similar wrapper call that takes in the schema + fetch() params and validates the response.
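
The wrapper is nothing fancy, something along these lines (simplified sketch; the names are just what I happen to call them):

  import { z } from "zod";

  // Takes a zod schema plus the usual fetch() arguments, throws on non-2xx,
  // and validates the parsed JSON against the schema before returning it.
  async function fetchApi(schema, url, init) {
    const res = await fetch(url, init);
    if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
    return schema.parse(await res.json());
  }

  // Usage:
  const Post = z.object({ id: z.number(), title: z.string() });
  const post = await fetchApi(Post, "https://jsonplaceholder.typicode.com/posts/1");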

How do you supply the schema on the other side?

I found that keeping the frontend & backend in sync was a challenge so I wrote a script that reads the schemas from the backend and generated an API file in the frontend.

There are a few ways, but I believe SSOT (single source of truth) is key, as others basically said. Some ways:

1. Shared TypeScript types

2. tRPC/ts-rest style: Automagic client w/ compile+runtime type safety

3. RTK (redux toolkit) query style: codegen'd frontend client

I personally prefer #3 for its explicitness - you can actually review the code it generates for a new/changed endpoint. It does come w/ the downside of more code, and as the codebase gets larger you start to need a cache so you don't regenerate the entire API on every little change.

Overall, I find the explicit approach to be worth it, because, in my experience, it saves days/weeks of eng hours later on in large production codebases in terms of not chasing down server/client validation quirks.

What is a validation quirk that would happen when using server side Zod schemas that somehow doesn’t happen with a codegened client?

I'll almost always lean on separate packages for any shared logic like that (at least if I can use the same language on both ends).

For JS/TS, I'll have a shared models package that just defines the schemas and types for any requests and responses that both the backend and frontend are concerned with. I can also define migrations there if model migrations are needed for persistence or caching layers.

It takes a bit more effort, but I find it nicer to own the setup myself and know exactly how it works rather than trusting a tool to wire all that up for me, usually in some kind of build step or transpilation.

Write them both in TypeScript and have both the request and response shapes defined as schemas for each API endpoint.

The server validates request bodies and produces responses that match the type signature of the response schema.

The client code has an API where it takes the request body as its input shape. And the client can even validate the server responses to ensure they match the contract.

It's pretty beautiful in practice: you make one change to the API, say renaming a field, and you immediately get all the points of use flagged as type errors.
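
Concretely, the shared module ends up looking something like this (a sketch; the schema names are made up):

  // shared/contracts.ts - imported by both the server and the client
  import { z } from "zod";

  export const renamePostRequest = z.object({
    postId: z.number(),
    newTitle: z.string(), // rename this field and every caller fails type-checking
  });

  export const renamePostResponse = z.object({
    id: z.number(),
    title: z.string(),
  });

  export type RenamePostRequest = z.infer<typeof renamePostRequest>;
  export type RenamePostResponse = z.infer<typeof renamePostResponse>;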

This will break old clients. Having a deployment strategy that takes that into account is important.

Effect provides a pretty good engine for compile-time schema validation that can be composed with various fetching and processing pipelines, with sensible error handling for cases when external data fails to comply with the schema or when network request fails.
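
A minimal sketch of that shape (assuming a recent Effect version where Schema ships in the core package; exact APIs may differ slightly):

  import { Effect, Schema } from "effect";

  const Post = Schema.Struct({ id: Schema.Number, title: Schema.String });

  const getPost = (id) =>
    Effect.tryPromise(() =>
      fetch(`https://jsonplaceholder.typicode.com/posts/${id}`).then((r) => r.json())
    ).pipe(Effect.flatMap(Schema.decodeUnknown(Post)));

  // Effect.runPromise(getPost(1)).then(console.log, console.error);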

The schema definition is more efficient than writing input validation from scratch anyway so it’s completely win/win unless you want to throw caution to the wind and not do any validation

Also want to shout out ts-rest. We have a typescript monorepo where the backend and frontend import the api contract from a shared package, making frontend integration both type-safe and dead simple.

I migrated from ts-rest to Effect/HttpApi. It's an incredible ecosystem, and Effect/Schema has overtaken my domain layer. Definitely a learning curve though.

For what it's worth, happy user of ts-rest here. Best solution I landed upon so far.

I never really liked the syntax of fetch: the need to also await response.json(), and having to implement additional error handling yourself -

  async function fetchDataWithAxios() {
    try {
      const response = await axios.get('https://jsonplaceholder.typicode.com/posts/1');
      console.log('Axios Data:', response.data);
    } catch (error) {
      console.error('Axios Error:', error);
    }
  }



  async function fetchDataWithFetch() {
    try {
      const response = await fetch('https://jsonplaceholder.typicode.com/posts/1');

      if (!response.ok) { // Check if the HTTP status is in the 200-299 range
        throw new Error(`HTTP error! status: ${response.status}`);
      }

      const data = await response.json(); // Parse the JSON response
      console.log('Fetch Data:', data);
    } catch (error) {
      console.error('Fetch Error:', error);
    }
  }

While true, in practice you'd only write this code once as a utility function; compare two extra bits of code in your own utility function vs loading 36 kB worth of JS.
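
Something like this covers the vast majority of calls (a minimal sketch):

  async function getJson(url, init) {
    const res = await fetch(url, init);
    if (!res.ok) throw new Error(`HTTP error! status: ${res.status}`);
    return res.json();
  }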

Yeah, that's the classic bundle size vs DX trade-off. Fetch definitely requires more boilerplate. The manual response.ok check and double await is annoying. For Lambda where I'm optimizing for cold starts, I'll deal with it, but for regular app dev where bundle size matters less, axios's cleaner API probably wins for me.

Agreed, but I think that in every project I've done I've put at least a minimal wrapper function around axios or fetch - so adding a teeny bit more to make fetch nicer feels like tomayto-tomahto to me.

You’re shooting yourself in the foot if you put naked fetch calls all over the place in your own client SDK though. Or at least going to extra trouble for no benefit

why don't they just set those as options

{ throwNotOk, parseJson }

they know that's 99% of fetch calls, I don't see why it can't be baked in.

I somehow don't get your point.

The following seems cleaner than either of your examples. But I'm sure I've missed the point.

  fetch(url).then(r=>r.ok ? r.json() : Promise.reject(r.status))
  .then(
    j=>console.log('Fetch Data:', j),
    e=>console.log('Fetch Error:', e)
  );

I share this at the risk of embarrassing myself in the hope of being educated.

Depends on your definition of clean, I consider this to be "clever" code, which is harder to read at a glance.

You'd probably put the code that runs the request in a utility function, so the call site would be `await myFetchFunction(params)`, as simple as it gets. Since it's hidden, there's no need for the implementation of myFetchFunction to be super clever or compact; prefer readability and don't be afraid of code length.

Except you might want different error handling for different error codes. For example, our validation errors return a JSON object as well but with 422.

So treating "get a response" and "get data from a response" separately works out well for us.

I usually write it like:

    const data = (await fetch(url)).then(r => r.json())

But it's very easy obviously to wrap the syntax into whatever ergonomics you like.

You don't need all those parens:

  await fetch(url).then(r => r.json())

why not?

    const data = await (await fetch(url)).json()

That's very concise. Still, the double await remains weird. Why is that necessary?

The first `await` is waiting for the response-headers to arrive, so you know the status code and can decide what to do next. The second `await` is waiting for the full body to arrive (and get parsed as JSON).

It's designed that way to support doing things other than buffering the whole body; you might choose to stream it, close the connection early etc. But it comes at the cost of awkward double-awaiting for the common case (always load the whole body and then decide what happens next).

So you can say:

    let r = await fetch(...);
    if (!r.ok) ...
    let len = r.headers.get("Content-Length");
    if (!len || Number(len) > 1000 * 1000)
        throw new Error("Eek!");

It isn't, the following works fine...

    var data = await fetch(url).then(r => r.json());

Understanding Promises/A+ (thenables) and async/await can sometimes be difficult or confusing, especially when mixing the two like above.

IMU because you don't necessarily want the response body. The first promise resolves after the headers are received, the .json() promise resolves only after the full body is received (and JSON.parse'd, but that's sync anyway).

Honestly it feels like yak shaving at this point; few people would write low-level code like this very often. If you connect with one API, chances are all responses are JSON so you'd have a utility function for all requests to that API.

Code doesn't need to be concise, it needs to be clear. Especially back-end code where code size isn't as important as on the web. It's still somewhat important if you run things on a serverless platform, but it's more important then to manage your dependencies than your own LOC count.

There has to be something wrong with a tech stack (Node + Lambda) that adds 100ms latency for some requests, just to gain the capability [1] to send out HTTP requests within an environment that almost entirely communicates via HTTP requests.

[1] convenient capability - otherwise you'd use XMLHttpRequest

1. This is not 100ms latency for requests. It's 100ms latency for the init of a process that loads this code. And this was specifically in the context of a Lambda function that may only have 128MB RAM and like 0.25vCPU. A hello world app written in Java that has zero imports and just prints to stdout would have higher init latency than this.

2. You don't need to use axios. The main value was that it provides a unified API that could be used across runtimes and has many convenient abstractions. There were plenty of other lightweight HTTP libs that were more convenient than the stdlib 'http' module.

On init, Lambda functions run on a full core, but on invoke a 128MB function runs at roughly 1/20th of a core.

node fetch is WAY better than axios (easier to use/understand, simpler); didn't really know people were still using axios

I do miss the axios extensions tho, it was very easy to add rate-limits, throttling, retry strategies, cache, logging ..

You can obviously do that with fetch but it is more fragmented and more boilerplate

Totally get that! I think it depends on your context. For Lambda where every KB and millisecond counts, native fetch wins, but for a full app where you need robust HTTP handling, the axios plugin ecosystem was honestly pretty nice. The fragmentation with fetch libraries is real. You end up evaluating 5 different retry packages instead of just grabbing axios-retry.

Sounds like there's space for an axios-like library built on top of fetch.

I think that's the sweet spot. Native fetch performance with axios-style conveniences. Some libraries are moving in that direction, but nothing's really nailed it yet. The challenge is probably keeping it lightweight while still solving the evaluating 5 retry packages problem.

Is this what you're looking for? https://www.npmjs.com/package/ky

I haven't used it but the weekly download count seems robust.

Ky is definitely one of the libraries moving in that direction. Good adoption based on those download numbers, but I think the ecosystem is still a bit fragmented. You've got ky, ofetch, wretch, etc. all solving similar problems. But yeah, ky is probably the strongest contender right now, in my opinion.

Like axios can do it if you specify the fetch backend, it just won't do the .json() asynchronously.

I'm actually not a big fan of the async .json() from fetch, because when it fails (because "not json"), you can't peek at the text instead. Of course, you can clone the response, apparently, and then read text from the clone... and if you're wrapping for some other handling, it isn't too bad.
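
i.e. something like this (a sketch):

  const res = await fetch(url);
  const backup = res.clone(); // clone before consuming the body
  let data;
  try {
    data = await res.json();
  } catch (err) {
    // not JSON after all: peek at the raw text via the clone
    console.error('Bad JSON:', await backup.text());
    throw err;
  }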

You still see axios used in amateur tutorials and stuff on dev.to and similar sites. There’s also a lot of legacy out there.

AI is going to bring that back like an 80s disco playing Wham. If you're gonna do it, do it wrong...

I've had Claude decide to replace my existing fetch-based API calls with Axios (not installed or present at all in the project), apropos of nothing during an unrelated change.

I had Gemini correct my code using Google's new LLM API to use the old one.

hahaha, I see it all the time in my responses. I immediately reject.

Interceptors (and extensions in general) are the killer feature for axios still. Fetch is great for scripts, but I wouldn't build an application on it entirely; you'll be rewriting a lot or piecing together other libs.
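
For anyone who hasn't used them, an interceptor is roughly a global hook on every request/response, e.g. (a sketch; the two helpers are stand-ins for whatever your app actually uses):

  import axios from 'axios';

  const getToken = () => localStorage.getItem('token');       // stand-in
  const redirectToLogin = () => { location.href = '/login'; }; // stand-in

  const api = axios.create({ baseURL: 'https://api.example.com' });

  // attach auth to every request
  api.interceptors.request.use((config) => {
    config.headers.Authorization = `Bearer ${getToken()}`;
    return config;
  });

  // central error handling for every response
  api.interceptors.response.use(
    (response) => response,
    (error) => {
      if (error.response?.status === 401) redirectToLogin();
      return Promise.reject(error);
    }
  );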

Right?! I think a lot of devs got stuck in the axios habit from before Node 18 when fetch wasn't built-in. Plus axios has that batteries included feel with interceptors, auto-JSON parsing, etc. But for most use cases, native fetch + a few lines of wrapper code beats dragging in a whole dependency.

This is all very good news. I just got an alert about a vulnerability in a dependency of axios (it's an older project). Getting rid of these dependencies is a much more attractive solution than merely upgrading them.

Isn't upgrading Node going to be a bigger challenge? (if you're on a Node version that's no longer receiving maintenance)

No idea how much compatibility breakage there is, but it's probably going to have to happen at some point, and reducing dependencies sounds worth it to me.

axios got discontinued years ago I thought, nobody should still be using it!

No? Its last update was 12 days ago

what about interceptors?

As a library author it's the opposite: while fetch() is amazing, ESM has been a painful but definitely worthwhile upgrade. It has all the things the author describes.

Interesting to get a library author's perspective. To be fair, you guys had to deal with the whole ecosystem shift: dual package hazards, CJS/ESM compatibility hell, tooling changes, etc so I can see how ESM would be the bigger story from your perspective.

I'm a small-ish time author, but it was really painful for a while since we were all dual-publishing in CJS and ESM, which was a mess. At some point some prominent authors decided to go full-ESM, and basically many of us followed suit.

The fetch() change has been big only for the libraries that did need HTTP requests, otherwise it hasn't been such a huge change. Even in those it's been mostly removing some dependencies, which in a couple of cases resulted in me reducing the library size by 90%, but this is still Node.js where that isn't such a huge deal as it'd have been on the frontend.

Now there's an unresolved one, which is the Node.js streams vs WebStreams, and that is currently a HUGE mess. It's a complex topic on its own, but it's made a lot more complex by having two different streaming standards that are hard to match.
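
For what it's worth, Node does ship conversion helpers between the two, which helps a bit but is still fiddly in practice (a sketch):

  import { Readable } from 'node:stream';
  import { createReadStream } from 'node:fs';

  const nodeStream = createReadStream('./data.txt');
  const webStream = Readable.toWeb(nodeStream);  // Node stream -> WHATWG ReadableStream
  const backAgain = Readable.fromWeb(webStream); // and back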

What a dual-publishing nightmare. Someone had to break the stalemate first. 90% size reduction is solid even if Node bundle size isn't as critical. The streams thing sounds messy, though. Two incompatible streaming standards in the same runtime is bound to create headaches.

I maintain a library also, and the shift to ESM was incredibly painful, because you still have to ship CJS, only now you have to work out how to write the code in a way that can be bundled either way, can be tested, etc etc.

It was a pain, but rollup can export both if you write the source in esm. The part I find most annoying is exporting the typescript types. There's no tree-shaking for that!

For simple projects you now needed to add rollup or some other build system that you didn't have or need before. For complex systems (with non-trivial exports), now you have a mess since it wouldn't work straight away.

Now with ESM if you write plain JS it works again. If you use Bun, it also works with TS straight away.

This is where I actually appreciated Deno's start with a clean break from npm, and later in pushing jsr. I'm mixed on how much of Node has come into Deno, however.

The fact that CJS/ESM compatibility issues are going away indicates it was always a design choice and never a technical limitation (most CJS format code can consume ESM and vice versa). So much lost time to this problem.

It was neither a design choice nor a technical limitation. It was a big complicated thing which necessarily involved fiddly internal work and coordination between relatively isolated groups. It got done when someone (Joyee Cheung) actually made the fairly heroic effort to push through all of that.

Joyee has a nice post going into details. Reading this gives a much more accurate picture of why things do and don't happen in big projects like Node: https://joyeecheung.github.io/blog/2024/03/18/require-esm-in...

Node.js made many decisions that had a massive impact on ESM adoption, from forcing extensions and dropping index.js to loaders and the complicated package.json "exports". In addition to Node.js steamrolling everyone, TC39 keeps making idiotic changes to the spec, like the `deferred import` and `with` syntax changes.

Requiring file extensions and not supporting automatic "index" imports was a requirement from browsers, where you can't just scan a file system, and people would be rightfully upset if their browser sent 4-10 HEAD requests to find the file a module was looking for.

"exports" controls in package.json was something package/library authors had been asking for for a long time even under CJS regimes. ESM gets a lot of blame for the complexity of "exports", because ESM packages were required to use it but CJS was allowed to be optional and grandfathered, but most of the complexity in the format was entirely due to CJS complexity and Node trying to support all the "exports" options already in the wild in CJS packages. Because "barrel" modules (modules full of just `export thing from './thing.js'`) are so much easier to write in ESM I've yet to see an ESM-only project with a complicated "exports". ("exports" is allowed to be as simple as the old main field, just an "index.js", which can just be an easily written "barrel" module).

> TC39 keeps making idiotic changes to the spec, like the `deferred import` and `with` syntax changes

I'm holding judgment on deferred imports until I figure out what use cases it solves, but `with` has been a great addition to `import`. I remember the bad old days of crazy string syntaxes embedded in module names in AMD loaders and Webpack (like the bang delimited nonsense of `json!embed!some-file.json` and `postcss!style-loader!css!sass!some-file.scss`) and how hard it was to debug them at times and how much they tied you to very specific file loaders (clogging your AMD config forever, or locking you to specific versions of Webpack for fear of an upgrade breaking your loader stack). Something like `import someJson from 'some-file.json' with { type: 'json', webpackEmbed: true }` is such a huge improvement over that alone. The fact that it is also a single syntax that looks mostly like normal JS objects for other very useful metadata attribute tools like bringing integrity checks to ESM imports without an importmap is also great.

You're right. It wasn't a design choice or technical limitation, but a troubling third thing: certain contributors consistently spreading misinformation about ESM being inherently async (when it's only conditionally async), and creating a hostile environment that “drove contributors away” from ESM work - as the implementer themselves described.

Today, no one will defend ERR_REQUIRE_ESM as good design, but it persisted for 5 years despite working solutions since 2019. The systematic misinformation in docs and discussions combined with the chilling of conversations suggests coordinated resistance (“offline conversations”). I suspect the real reason for why “things do and don’t happen” is competition from Bun/Deno.

There were some legitimate technical decisions; that said, imho, Node should have just stayed compatible with Babel's implementation and there would have been significantly less friction along the way. It was definitely a choice not to do so, for better and worse.

It's interesting to see how many ideas are being taken from Deno's implementations as Deno increases Node interoperability. I still like Deno more for most things.

Those... are not mutually exclusive as killer upgrades. No longer having to use a nonsense CJS syntax is absolutely also a huge deal.

Web parity was "always" going to happen, but the refusal to add ESM support, and then when they finally did, the refusal to have a transition plan for making ESM the default, and CJS the fallback, has been absolutely grating for the last many years.

Especially since it seems perfectly possible to support both simultaneously. Bun does it. If there's an edge case, I still haven't hit it.

With Node's built-in fetch you're going to have to write a wrapper for error handling/logging/retries etc. in any app/service of size. After a while, we ended up with something axios/got-like anyway that we had to fix a bunch of bugs in.

And AFAIK there is still no upload progress with fetch.

It has always astonished me that platforms did not have first class, native "http client" support. Pretty much every project in the past 20 years has needed such a thing.

Also, "fetch" is lousy naming considering most API calls are POST.

That's a category error. Fetch just refers to making a request; POST is the method, or HTTP verb, used when making the request. If you're really keen, you could roll your own:

  const post = (url) => fetch(url, {method:"POST"})

I read this as OP commenting on the double meaning of the category. In English, “fetch” is a synonym of “GET”, so it’s silly that “fetch” as a category is independent of the HTTP method

That makes sense.

“Most” is doing a lot of heavy lifting here. I use plenty of APIs that are GET

I was thinking server-side where it's probably 90%+ POST. True, client-side a lot different (node the only player client-side, though).

Node was created with first-class native http server and client support. Wrapper libraries can smooth out some rough edges with the underlying api as well as make server-side js (Node) look/work similar to client-side js (Browser).

Undici in particular is very exciting as a built-in request library, https://undici.nodejs.org

Undici is solid. Being the engine behind Node's fetch is huge. The performance gains are real and having it baked into core means no more dependency debates. Plus, it's got some great advanced features (connection pooling, streams) if you need to drop down from the fetch API. Best of both worlds.

It's in core but not exposed to users directly. You still need to install the npm module if you want to use it, which is required if, for example, you need to go through an outgoing proxy in your production environment.
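
e.g. (a sketch, assuming the standalone undici package; the proxy URL is made up):

  import { fetch, ProxyAgent, setGlobalDispatcher } from 'undici';

  // route all undici/fetch traffic through an outgoing proxy
  setGlobalDispatcher(new ProxyAgent('http://proxy.internal:8080'));

  const res = await fetch('https://example.com/api');
  console.log(res.status);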

This has been the case for quite a while; most of the things in this article aren't brand new.

It kills me that I keep seeing axios used instead of fetch. It's like people don't care; they copy-paste existing projects as a starting point and that's it.

Maybe I'm wrong and it's been updated, but doesn't axios support progress indicators out of the box, and isn't it just generally cleaner?

That said, there are npm packages that are ridiculously obsolete and overused.

Maybe, but I seldom see those things being used; it is always just a request/response kind of workflow.

axios works for both Node and the browser in production code; not sure if fetch can do as much as axios in the browser though.
