> Time to fetch version data for each one of those packages: ~12 hours (yikes)

The author could improve the batching in fetchAllPackageData by not waiting for all 50 (BATCH_SIZE) promises to resolve at once. I just published a package for proper promise batching last week: https://www.npmjs.com/package/promises-batched

What's the benefit of promises like this here?

Just spin up 50 call chains in a loop. When one completes, start the next on the next tick. It's like 3 lines of code, no libraries needed, and you're always doing 50 at a time. You can still use await.

async function work() { await thing(); process.nextTick(work); }

for (let i = 0; i < 50; i++) { work(); }

Then maybe add a separate timer to check how many tasks are active, I guess.
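Filled out into runnable form, that sketch is a small worker pool — here `tasks` is a hypothetical array of zero-argument async functions standing in for whatever `thing()` is:

```javascript
// N call chains, each pulling the next task as soon as its current one
// finishes. No batch barrier: you're always running `concurrency` tasks.
async function runPool(tasks, concurrency = 50) {
  let next = 0;
  const results = new Array(tasks.length);
  async function work() {
    while (next < tasks.length) {
      const i = next++;               // claim the next free task index
      results[i] = await tasks[i]();
    }
  }
  // Spin up `concurrency` chains and wait for all of them to drain.
  await Promise.all(Array.from({ length: concurrency }, () => work()));
  return results;
}
```

A few more than 3 lines once you collect results, but still no library.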

Promise.all waits for all 50 promises to resolve, so if one of those promises takes 3s while the other 49 take 0.5s, you're wasting 2.5s awaiting each batch.
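For reference, this is roughly what the batched pattern looks like (names like `tasks` are illustrative, not the article's actual code):

```javascript
// Batched fetching: each batch of `batchSize` waits for its slowest
// member before the next batch starts, so one 3s straggler holds up
// 49 tasks that finished in 0.5s. `tasks` is a hypothetical array of
// zero-argument async functions.
async function fetchInBatches(tasks, batchSize = 50) {
  const results = [];
  for (let i = 0; i < tasks.length; i += batchSize) {
    const batch = tasks.slice(i, i + batchSize).map(t => t());
    results.push(...await Promise.all(batch)); // idles until the straggler resolves
  }
  return results;
}
```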

The implementation is rather simple, but more than 3 LoC: https://github.com/whilenot-dev/promises-batched/blob/main/s...

I know. My point is you can do better without a library.

Why not write all of our applications in one file? Why bother using (language-specific) modules? Taken to its logical extreme, your argument makes DRY a fanatical doomsday computer science cult.

Worried about being rate limited or DoSing the server.

Sure, the need for backpressure arises anyway, regardless of any batching optimization.
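One cheap way to add that backpressure on top of either approach is to space requests out client-side — a minimal sketch, where the interval and names are made up, not npm's actual limits:

```javascript
// Minimal client-side rate limiting: space calls at least `minIntervalMs`
// apart, even when many callers queue up concurrently.
function makeLimiter(minIntervalMs) {
  let nextSlot = 0; // timestamp at which the next call may start
  return async function limited(fn) {
    const now = Date.now();
    const wait = Math.max(nextSlot - now, 0);
    nextSlot = Math.max(nextSlot, now) + minIntervalMs; // claim a slot synchronously
    if (wait > 0) await new Promise(r => setTimeout(r, wait));
    return fn();
  };
}
```

Wrapping each fetch in `limited(...)` caps the request rate no matter how many chains are in flight.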

Couldn't find any specific rate limit numbers besides the one mentioned here[0] from 2019:

> Up to five million requests to the registry per month are considered acceptable at this time

[0]: https://blog.npmjs.org/post/187698412060/acceptible-use.html

Ah this is cool, thanks!