> Node was literally designed to be good for one thing - backend web service development.

I don't think it was, at least not originally. But even if it was, that doesn't mean it actually is good, and certainly not for all cases.

> Node running on a potato of a CPU can handle thousands of requests per second w/o breaking a sweat using the most naively written code.

The parent comment is specifically about this: it breaks down past a certain point.

> you can get a full ExpressJS service up and running, including auth, in less than a dozen lines of code

Ease of use is nice at the start, but it usually becomes technical debt. E.g., you can write a pretty small search algorithm, but it will perform terribly. Not a problem at the start. You can set up a service with just a little bit of code in any major language using some framework. Heck, there are no-code servers. But you will have to add more and more workarounds as the application grows. There's no free lunch.
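A hypothetical sketch of that "small search" turning into debt: the naive version scans the whole array on every call, and the eventual workaround is an index. All names and data here are made up for illustration.

```javascript
// Sample data -- stand-in for whatever collection the app grows around.
const users = [
  { id: 1, name: 'ada' },
  { id: 2, name: 'linus' },
  { id: 3, name: 'grace' },
];

// Naive version: a few lines, O(n) per lookup. Fine at 100 users,
// painful at a million.
function findUserNaive(id) {
  return users.find((u) => u.id === id);
}

// The later workaround: build an index once, O(1) lookups after.
const byId = new Map(users.map((u) => [u.id, u]));
function findUserIndexed(id) {
  return byId.get(id);
}
```

Both return the same results; the difference only shows up once the data outgrows the shortcut.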

> The TS team switched to go because JS is horrible at anything that isn't strings or doubles.

They switched because V8 is too slow and uses quite a bit of memory. At least, that's what they wrote. But that was not what I wanted to address. I was trying to say that if you have to switch, Go is a decent option, because it's so close to JS/TS.

> But for the majority of service endpoints ...

Because they are simple, as you say. But when you run into problems, asking the V8 team to bail you out with a few more hacks doesn't seem right.

> Ease of use is nice for a start, but usually becomes technical debt.

The difference with the Express ecosystem is that you aren't getting any less power than with FastAPI or Spring Boot, you just get less overhead. Spring Boot has 10x the config to get the same endpoint up and running as Express, and FastAPI has at least 3x the magic. Now some of FastAPI's magic is really useful (auto-converting Pydantic models to JSON Schemas on endpoints, auto-generating API docs, etc.), but it is still magic compared to what Express gets you.

The scaling story of Node is also really easy to think about and do capacity planning for. You aren't worried about contention or IPC (as this thread has pointed out, if you are doing IPC in Node you are in for a bad time, so just don't!); your unit of scaling is the Node process itself. Throw it in a Docker image, throw that in a k8s cluster, assign .25 CPU to each instance. Scale up and down as needed.
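One way that might look as a k8s Deployment, with every name, image, and number made up for the sketch:

```yaml
# Illustrative Deployment: the Node process is the unit of scaling.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 4            # scale up/down by changing this (or via an HPA)
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.0.0   # hypothetical image
          resources:
            requests:
              cpu: 250m       # the ".25 CPU per instance" from above
              memory: 256Mi
```

Capacity planning then reduces to requests-per-second per replica times the replica count.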

Sometimes having one really damn simple and easy to understand building block is more powerful than having 500 blocks that can be misconfigured in ten thousand different ways.