First, in all benchmarks but two, Java performs just as well as C/C++/Rust, and in one of those two, Go performs as well as the low-level languages. Second, I don't know the details of that one benchmark where the low-level languages indeed perform better than high-level ones, but I don't see any reason to believe it has anything to do with virtual threads.
Modern Java GCs typically offer a throughput boost over more manual memory management. And on latency: even if virtual threads were very inefficient and you added a GC pause from Java's new GCs on top, you'd still be well below 1ms, i.e. not a dominant factor in a networked program.
(Yes, there's still one potential cause of lower throughput in Java, namely the lack of inlined objects in arrays, but that will be addressed soon, and it isn't a big factor in most server applications anyway, nor is it related to IO.)
BTW, writing a program in C++ has always been more or less as easy as writing it in Java/C# etc. The big cost of C++ is in evolution and refactoring over many years, because in low-level languages local changes to code have a much more global impact. That has nothing to do with the design of the language; it's an essential property of tracking memory management at the code level (unless you use smart pointers, i.e. a refcounting GC, for everything, but then things will be really slow, as refcounting sacrifices performance in its goal of minimising footprint).
A 1-millisecond pause is an eternity. That's on the order of a disk access latency. Unless your computation is completely and unavoidably dominated by a slow network, that latency will have a large impact on performance.
Ironically, Java has okay performance for pure computation. Where it performs poorly is in I/O-intensive applications. Scheduling quality, which a GC actively interferes with, has a much bigger impact on performance for I/O-intensive applications than operation latency (which can be cheaply hidden).
> A 1-millisecond pause is an eternity
Who said anything about a 1ms pause? I said that even if virtual thread schedulers had terrible latencies (which they don't) and you added GC pauses, you'd still be well below 1ms, which is not an eternity in the context of network IO, which is what we're talking about here.
To be fair, 1ms is an eternity for network IO as well. Only over the internet is it considered acceptable.
It is not "an eternity". A roundtrip of 100-200us, which is closer to actual GC pause times these days (remember, I said well below 1ms), is considered quite good and is within the 1ms order of magnitude. A <<1ms pause once every several seconds has no significant impact on all but a few niche programs, and you may even get better throughput. OS-imposed hiccups (such as page faults or scheduler decisions) are about the same as those caused by today's Java GCs. Programs for which these are "an eternity" don't use regular kernels.
Performance without a goal is wasted effort. Sometimes that 1 millisecond matters; most of the time it doesn't, which is why everyone is using Web browsers with applications written in dynamic languages that have even worse GC pauses.
Any GC pause is unacceptable if your goal is predictable throughput and latency.
Modern GCs can be pauseless, but either way you're spending CPU on GC and not on servicing requests/customers.
As for C++, std::unique_ptr has no ref counting at all.
shared_ptr does, but that's why you avoid it at all costs if you need to move things around. You only pay the cost when copying the shared_ptr itself, and you almost never need a shared_ptr anyway; even when you do, you can always avoid copying in the hot path.
> Modern gcs can be pauseless, but either way you’re spending CPU on gc and not servicing requests/customers.
Since memory is finite and all computation uses some, every program spends CPU on memory management regardless of technique. Tracing GCs often spend less CPU on memory management than low-level languages.
> std::unique_ptr has no ref counting at all.
It still needs to do work to free the memory. Tracing GCs don't: the whole point of a tracing GC is that it spends work on keeping objects alive, not on freeing memory. Since the size of the working set is pretty much constant for a given program, and the frequency of GC cycles is the ratio of the allocation rate (also roughly constant) to the heap size, you can arbitrarily reduce the amount of CPU spent on memory management by increasing the heap.