I just hope that with the new lightweight threads I never have to write async reactive code again. It was such an unproductive mistake; most applications do not need that level of complexity. Now we can confidently say no applications need that level of added complexity.

Even for the applications that supposedly need reactive, I do wonder if anyone has actually done the financial analysis of the extra developers and development time needed for reactive versus just throwing more servers at the problem.

C10k is basically impractical without reactive / async / coroutines / that style. With it, it's extremely achievable even on midrange consumer hardware, depending on the specific workload.

Nah, you have easily been able to run well over 10k traditional threads on regular hardware for a long time now.

c10k was coined in the last millennium, when a brand new PC would have 128 MB of memory and a single-core 400 MHz CPU. And people were doing it with async IO, not threads, back then. (Around the same time, Java people got interested in VolanoMark, which is a similar benchmark but with threads, since Java didn't even have non-blocking IO then.)

See e.g. this thread about 100k+ threads on Linux in 2002: https://lkml.iu.edu/hypermail/linux/kernel/0209.2/1153.html .. which mostly concerns itself with conserving memory address space, since they were dealing with the 32-bit 4 GB limitation of decades past.

(c10k was also about OS TCP stack limitations that were soon fixed)

C10k is a pretty obsolete problem now; ten thousand threads is trivial on modern machines. With modern load balancing you don't even have to worry about that problem any more. And that was before Java 25.

On modern servers, yes. On consumer hardware it still performs poorly, even if it's technically achievable. That also disregards the kernel memory and thread-contention overheads that eat into effective performance.

It's obsolete not just because of new hardware but because we got better ergonomics for these new programming styles.

That is completely different from what I’m discussing. You can have millions of virtual threads without ending up in reactive hell. And even without virtual threads: what’s more expensive, 2x the dev team or a few more servers?

Depends on how many times the few more servers are duplicated. Getting an extra server for on-premises installed products is worse than pulling teeth. It’s not one server, it is one server times a thousand.

I can assure you the cost difference between virtual threads and reactive is not a factor of a thousand.

It was a better model for reasoning about concurrency, i.e. "evaluate two expressions", rather than where virtual threads are headed, i.e. "you don't have to learn anything": just spawn two threads, keep writing sequences of statements, and pretend nothing has changed since the 90s.

https://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-...
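To make the contrast concrete, here is a sketch of both styles in plain Java 21+ (slowA and slowB are hypothetical stand-ins for real work; neither snippet is from the linked paper):

```java
import java.util.concurrent.CompletableFuture;

public class TwoStyles {
    static int slowA() { return 1; }
    static int slowB() { return 2; }

    public static void main(String[] args) throws Exception {
        // Future/reactive style: concurrency expressed by composing two expressions.
        int sum1 = CompletableFuture.supplyAsync(TwoStyles::slowA)
                .thenCombine(CompletableFuture.supplyAsync(TwoStyles::slowB), Integer::sum)
                .join();

        // Thread style: spawn two (virtual) threads, keep writing sequential statements.
        var a = new int[1];
        var b = new int[1];
        Thread ta = Thread.ofVirtual().start(() -> a[0] = slowA());
        Thread tb = Thread.ofVirtual().start(() -> b[0] = slowB());
        ta.join();
        tb.join();
        int sum2 = a[0] + b[0];

        System.out.println(sum1 + " " + sum2);
    }
}
```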

Well, I hoped too. The problem is that "lightweight" threads are not that lightweight, as they need garbage collection. So, in theory, one can create 100K threads on one machine, but in practice that keeps burning CPU on GC cycles.

Another question is what those lightweight threads are doing. If they are CPU-bound, that's fine: you pay the GC penalty and that's all. But if they access limited resources (a database, another HTTP service, etc.), then in a real application you face the standard issue: you cannot hit the target system with whatever load you want; sooner or later that external system will push back.

The good thing about reactive programming is that it does not pretend the above problem does not exist. It forces you to handle errors and backpressure, because those problems will not magically disappear when we switch to green threads, lightweight threads, etc. There is no free lunch here: the network has its limits, databases have to write to disk eventually, and so on.
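For what it's worth, the backpressure contract reactive APIs enforce is visible even in the JDK's own java.util.concurrent.Flow: the subscriber must explicitly request() each item, and SubmissionPublisher.submit blocks when the buffer fills. A minimal sketch (not tied to any particular reactive library):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {
    public static void main(String[] args) throws Exception {
        var latch = new CountDownLatch(1);
        try (var publisher = new SubmissionPublisher<Integer>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                Flow.Subscription sub;
                public void onSubscribe(Flow.Subscription s) { sub = s; s.request(1); }
                public void onNext(Integer item) {
                    // process item, then explicitly ask for the next one:
                    sub.request(1);
                }
                public void onError(Throwable t) { latch.countDown(); }
                public void onComplete() { latch.countDown(); }
            });
            // submit() blocks when the subscriber's buffer is full: backpressure.
            for (int i = 0; i < 100; i++) publisher.submit(i);
        } // close() signals onComplete to the subscriber
        latch.await();
        System.out.println("complete");
    }
}
```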

> So, in theory, one can create 100K threads on one machine, but in practice that's going to keep burning processor for GC cycles.

The focus on "100k threads" and GC overhead is a red herring. The real win isn't spawning a massive number of threads, but automatically yielding on network I/O, like e.g. goroutines do. In an I/O-bound web application, you'd have a single virtual thread handling the whole request, just like a goroutine does. The GC overhead caused by the virtual thread is minuscule compared to the heap allocations caused by everything else going on in the request. And if you really have a scenario for 100k virtual threads, they would not be short-lived.
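A sketch of this virtual-thread-per-request style on Java 21+ (handleRequest is a hypothetical stand-in for blocking request work; in a real server it would make blocking DB and HTTP calls that park the virtual thread):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadPerRequest {
    // Plain sequential code: the virtual thread parks on blocking I/O,
    // releasing its carrier thread, so this scales without async plumbing.
    static String handleRequest(int id) {
        // fetchFromDb(id); callDownstreamService(id);  // blocking calls are fine here
        return "response-" + id;
    }

    public static void main(String[] args) throws Exception {
        // One virtual thread per task: cheap enough that pooling is unnecessary.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int id = i;
                executor.submit(() -> handleRequest(id));
            }
        } // close() waits for all submitted tasks to finish
        System.out.println("done");
    }
}
```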

> But if they access limited resources (database, another HTTP service), etc. in real application you face the standard issue: you cannot hit the targeted system with any data you want

Then why would you do it? That sounds like an architectural problem, not a virtual thread problem. In an actor system, for example, you wouldn't hit the database directly from 100k different actors.
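One way that funneling could look, sketched with JDK primitives (executeQuery is a hypothetical placeholder): a bounded queue in front of a single gateway worker, so producers block instead of overwhelming the database.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DbGateway {
    public static void main(String[] args) throws Exception {
        // Bounded queue: producers block (natural backpressure) when it is full.
        BlockingQueue<Integer> requests = new ArrayBlockingQueue<>(100);

        Thread worker = Thread.ofVirtual().start(() -> {
            try {
                while (true) {
                    int id = requests.take();
                    if (id < 0) break;     // poison pill shuts the worker down
                    // executeQuery(id);   // the only place that touches the DB
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Many producers (stand-ins for actors / request handlers):
        for (int i = 0; i < 1_000; i++) requests.put(i);  // blocks when full
        requests.put(-1);
        worker.join();
        System.out.println("drained");
    }
}
```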

> The good thing in reactive programming is that it does not try to pretend that above problem does not exist.

This compares a high-level programming paradigm, complete with its own libraries and frameworks, to a single, low-level concurrency construct. The former is a layer of abstraction that hides complexity, while the latter is a fundamental building block that, by design, does not and cannot hide anything.

> It forces to handle errors, to handle backpressure, as those problems will not magically disappear when we switch to green threads, lightweight threads, etc.

Synchronous code handles errors in the most time-tested and understandable way there is. It is easy to reason about and easy to debug. Reactive programming requires explicit backpressure handling because its asynchronous nature creates the problem in the first place. The simplest form of "backpressure" in synchronous code with a limited amount of threads is the act of blocking. For anything more than that, there are the classic tools (blocking queues, semaphores...) or higher-level libraries built on top of them.
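For instance, a plain java.util.concurrent.Semaphore caps concurrency for blocking code; under virtual threads, acquire() simply parks the thread. A sketch, with Thread.sleep standing in for a real query:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedDbAccess {
    // At most 10 concurrent "DB calls", even with thousands of virtual threads.
    static final Semaphore DB_PERMITS = new Semaphore(10);
    static final AtomicInteger inFlight = new AtomicInteger();
    static final AtomicInteger maxInFlight = new AtomicInteger();

    static void queryDb() throws InterruptedException {
        DB_PERMITS.acquire();         // parks the virtual thread when saturated
        try {
            int now = inFlight.incrementAndGet();
            maxInFlight.accumulateAndGet(now, Math::max);
            Thread.sleep(5);          // stand-in for the actual query
        } finally {
            inFlight.decrementAndGet();
            DB_PERMITS.release();
        }
    }

    public static void main(String[] args) throws Exception {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 1_000; i++) {
                executor.submit(() -> {
                    try { queryDb(); }
                    catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });
            }
        }
        System.out.println("max in flight: " + maxInFlight.get());
    }
}
```

Despite 1,000 concurrent callers, the observed in-flight maximum never exceeds the permit count.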

> The real win isn't spawning a massive number of threads, but automatically yielding on network I/O

This is of course what normal OS threads do as well: they get suspended when blocking on IO, which is why 100k OS threads doing IO work fine too.

Yes. What I was trying to say is that there is now a lightweight processing unit that can still suspend on IO (independently, without involving the OS scheduler), but without relying on async/reactive patterns at the code level. This required significant changes to the standard library and runtime.
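On Java 21+ the difference is easy to see: spawning 100k virtual threads that all block at the same time is unremarkable, while the same number of OS threads would be very heavy. A quick sketch, using Thread.sleep as the blocking (parking) operation:

```java
public class ManyThreads {
    public static void main(String[] args) throws Exception {
        var threads = new Thread[100_000];
        for (int i = 0; i < threads.length; i++) {
            // Each virtual thread parks on sleep; the JVM scheduler multiplexes
            // all of them over a small pool of carrier (OS) threads.
            threads[i] = Thread.ofVirtual().start(() -> {
                try { Thread.sleep(100); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
        }
        for (Thread t : threads) t.join();
        System.out.println("all " + threads.length + " finished");
    }
}
```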