It seems to me that async io struggles whenever people try it.

For instance, it's where Rust goes to die, because it subverts the stack-based paradigm behind ownership. I used to find it fun to write little applications like web servers in aio Python, particularly if message queues and websockets were involved, but for ordinary work you're better off using gunicorn. The trouble is that conventional async i/o solutions are all single threaded, and in an age when it's common to have a 16-core machine on your desktop that makes no sense. It would be like starting a chess game by dumping out all your pieces except your King.

Unfashionable languages like Java and .NET that have quality multithreaded runtimes are the way to go because they provide a single paradigm to manage both concurrency and parallelism.

> Unfashionable languages like Java and .NET that have quality multithreaded runtimes are the way to go because they provide a single paradigm to manage both concurrency and parallelism.

First, that would be Java and Go, not Java and .NET, as .NET offers a separate construct (async/await) for high-throughput concurrency.

Second, while "unfashionable" in some sense, I guess, it's no wonder that Java is many times more popular than any "fashionable" language. Also, if "fashionable" means "much discussed on HN", then that has historically been a terrible predictor of language success. There's almost an inverse correlation between how much a language is discussed on HN and its long-term success, and that's not surprising, as it's the less commonplace things that are more interesting to talk about. HN is more Vogue magazine than the New York Times.

Kind of: in .NET you also have structured concurrency and dataflow, which offer another way to do this without having to explicitly write async/await.

Yes, sadly Java and .NET are unfashionable in circles like HN and recent SaaS startups. I keep seeing products that only offer nodejs-based SDKs, and when they do offer Java/.NET SDKs they are generally outdated versus the nodejs one.

Hacker News has mostly discussed JavaScript and TypeScript over the past 15 years. These languages do seem to have some long-term success.

JavaScript has success because it has a monopoly in the browser: anything you want to do there has to go through JavaScript. It's not because of any merit of the language.

I don't think so. Things built in those languages may have been discussed on HN, but the amount of discussion about those languages has not been proportional at all to their popularity.

In this context I interpret unfashionable as boring/normal/works/good enough/predictable etc.

> Unfashionable languages like Java and .NET that have quality multithreaded runtimes are the way to go because they provide a single paradigm to manage both concurrency and parallelism.

At the cost of not being able to actually provide the same throughput, latency, or memory usage as lower-level languages that don't enforce the same performance-pessimizing abstractions on everything. Engineering is about tradeoffs, but pretending that Java or .NET have solved this is naive.

> At the cost of not being able to actually provide the same throughput, latency, or memory usage

Only memory usage is true with regard to Java in this context (.NET actually doesn't offer a shared thread abstraction; it's Java and Go that do), and even that is often misunderstood. Low-level languages are optimised for minimal memory usage, which is very important on RAM-constrained devices, but can mean wasting CPU on most machines: https://youtu.be/mLNFVNXbw7I

This optimisation for memory footprint also makes it harder for low-level languages to implement user-mode threading as efficiently as high-level languages.

Another matter is that there are two different use cases for asynchronous constructs that may tempt implementors to address them with a single implementation. One is the generator use case. What makes it special is that there are exactly two communicating parties, and both of their state may fit in the CPU cache. The other use case is general concurrency, primarily for IO. In that situation, a scheduler juggles a large number of user-mode threads, and because of that, there is likely a cache miss on every context switch, no matter how efficient it is.

However, in the second case, almost all of the performance is due to Little's law rather than context-switch time (see my explanation here: https://inside.java/2020/08/07/loom-performance/). That means that a "stackful" implementation of user-mode threads can have no significant performance penalty for the second use case (which, BTW, I think has much more value than the first), even though a more performant implementation is possible for the first use case.

In Java we decided to tackle the second use case with virtual threads, and so far we've not offered something for the first (for which the demand is significantly lower). What happens in languages that choose to tackle both use cases with the same construct is that in the second and more important use case they gain no more than negligible performance (at best), but they pay for that with a substantial degradation in user experience.
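To make the "second use case" concrete, here's a minimal, hypothetical sketch (class name and numbers are mine, assuming JDK 21+): thousands of tasks block in a sleep standing in for a network call, yet the whole batch completes in wall time on the order of a single wait, because a parked virtual thread frees its carrier OS thread.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Many concurrent blocking-style tasks on virtual threads. Each task
// blocks in sleep() -- a stand-in for network IO -- without pinning an
// OS thread, so 10_000 tasks that each "wait" 100 ms finish in wall time
// on the order of one wait, not the sum of all waits.
public class VirtualThreadsSketch {
    public static long run(int tasks, long waitMillis) throws Exception {
        AtomicInteger done = new AtomicInteger();
        Instant start = Instant.now();
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < tasks; i++) {
                exec.submit(() -> {
                    try {
                        Thread.sleep(waitMillis); // parks the virtual thread only
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.incrementAndGet();
                });
            }
        } // try-with-resources: close() waits for all submitted tasks
        if (done.get() != tasks) throw new AssertionError("lost tasks");
        return Duration.between(start, Instant.now()).toMillis();
    }

    public static void main(String[] args) throws Exception {
        long elapsed = run(10_000, 100);
        System.out.println("10000 blocking tasks completed in ~" + elapsed + " ms");
    }
}
```

The code reads like plain blocking code; there is no callback or await anywhere, which is the user-experience point being made above.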

It sounds like you’re disagreeing, yet no case is made that throughput and latency aren’t worse.

For example, the best frameworks on TechEmpower are all Rust, C and C++, with the best Java coming in at 25% slower on that microbenchmark. My point stands: it is generally true that well-written rust/c/c++ outperforms well-written Java and .Net, and not just with lower memory usage. The “engineering effort per performance” maybe skews to Java, but that’s different from absolute performance. With rust, it’s also less clear to me whether that is actually even true.

[1] https://www.techempower.com/benchmarks/#section=data-r23

This kind of discussion is always a wasted effort, because in the end we are all using Electron-based apps and Python scripting for AI tools.

Winning benchmark games doesn't matter if the customer doesn't get what they need, just something that runs at blazing speed.

Honestly, if these languages are only winning by 25% in microbenchmarks, where I’d expect the difference to be biggest, that’s a strong argument for Java in my book. I didn’t realise it was so close, and I hate async programming, so I’m definitely not doing it for an, at most, 25% boost.

It’s not about the languages only, but also about runtimes and libraries. The Vert.x verticles are reactive, and Java devrel folks are pushing everyone from reactive to virtual threads now; you won’t see those perform in that ballpark. If you look at the bottom of the benchmark results table, you’ll find Spring Boot (servlets and, a bit higher up, Reactor), together with Django (Python). So “Java” in practice is different from niche Java. And if you look inside the codebase, you’ll see the JVM options. In addition, they don’t directly publish CPU and memory utilization; you can extract it from the raw results, but it’s inconclusive.

This stops short of actually validating the benchmark payloads and hardware against your specific scenario.

> So “Java” in practice is different from niche Java.

This is an odd take, especially in a discussion of Rust. In practice, projects using Rust as an HTTP server backend are near non-existent in comparison. Does that mean we just get to write off the Rust benchmarks?

Java performs, as shown by the benchmarks.

I don’t understand what you’re saying. Typical Java is Spring Boot; typical Rust is Axum and Actix. I don’t see why it would make sense to push the argument ad absurdum. Vert.x is not typical Java; it’s not easy to get it right. But Java the ecosystem profits from Netty in terms of performance, which does the best it can to avoid the JVM runtime system. And it’s not always about “HTTP servers”, though that’s what that TechEmpower benchmark subject matter is about: frameworks, not just languages.

Your last sentence reads like an expression of faith. I’ll only remark that performance is relative to one’s project specs.

In some of those benchmarks, Quarkus (which is very much "typical Java") beats Axum, and there's far more software being written in "niche Java" than in "typical Rust". As for Netty, it's "avoiding the JVM" (standard library, really) less now, and to the extent that it still does, it might not be working in its favour. E.g. we've been able to get better results with plain blocking code and virtual threads than with Netty, except in situations where Netty's codecs have optimisations done over many years, and could have been equally applied to ordinary Java blocking code (as I'm sure they will be in due time).

I didn’t make the claim that it’s worth it. But when it is absolutely needed Java has no solution.

And remember, we’re talking about a very niche and specific I/O microbenchmark. Start looking at things like SIMD (currently - I know Java is working on it) or at more compute-bound work in general, and the gap will widen. Java still doesn’t have the tools to write really high-performance code.

But it does. Java already gives you direct access to SIMD, and the last major hurdle to 100% of hardware performance with idiomatic code, flattened structs, will be closed very soon. The gap has been closing steadily, and there's no sign of a change in the trend. Actually, it's getting harder and harder to find cases where a gap exists at all.

It is called JNI, or Panama nowadays.

Too many people go hard on "must be 100% pure", meanwhile Python is taking over the AI world via native library bindings.

First, in all benchmarks but two, Java performs just as well as C/C++/Rust, and in one of those two, Go performs as well as the low-level languages. Second, I don't know the details of that one benchmark where the low-level languages indeed perform better than high-level ones, but I don't see any reason to believe it has anything to do with virtual threads.

Modern Java GCs typically offer a throughput boost over more manual memory management. And on latency, even if virtual threads were very inefficient and you added a GC pause with Java's new GCs, you'd still be well below 1ms, i.e. not a dominant factor in a networked program.

(Yes, there's still one cause for potential lower throughput in Java, which is the lack of inlined objects in arrays, but that will be addressed soon, and isn't a big factor in most server applications anyway or related to IO)

BTW, writing a program in C++ has always been more or less as easy as writing it in Java/C#, etc.; the big cost of C++ is in evolution and refactoring over many years, because in low-level languages local changes to code have a much more global impact. That has nothing to do with the design of the language but is an essential property of tracking memory management at the code level (unless you use smart pointers, i.e. a refcounting GC, for everything, but then things will be really slow, as refcounting sacrifices performance in its goal of minimising footprint).

A 1-millisecond pause is an eternity. That’s disk-access latency territory. Unless your computation is completely and unavoidably dominated by a slow network, that latency will have a large impact on performance.

Ironically, Java has okay performance for pure computation. Where it shows poorly is I/O-intensive applications. Scheduling quality, which a GC actively interferes with, has a much bigger impact on performance for I/O-intensive applications than operation latency (which can be cheaply hidden).

> A 1-millisecond pause is an eternity

Who said anything about a 1ms pause? I said that even if virtual thread schedulers had terrible latencies (which they don't) and you added GC pauses, you'd still be well below 1ms, which is not an eternity in the context of network IO, which is what we're talking about here.

To be fair, 1ms is an eternity for network IO as well. Only over the internet is it considered acceptable.

It is not "an eternity". A roundtrip of 100-200us - which is closer to the actual GC pause time these days (remember, I said well below 1ms) - is considered quite good and is within the 1ms order of magnitude. Getting a <<1ms pause once every several seconds is not a significant impact to all but a few niche programs, and you may even get better throughput. OS-imposed hiccups (such as page faults or scheduler decisions) are about the same as those caused by today's Java GCs. Programs for which these are "an eternity" don't use regular kernels.

Performance without a goal is wasted effort. Sometimes that 1 millisecond matters; most of the time it doesn't, hence why everyone is using web browsers running applications written in dynamic languages with even worse GC pauses.

Any GC pause is unacceptable if your goal is predictable throughput and latency.

Modern GCs can be pauseless, but either way you’re spending CPU on GC and not on servicing requests/customers.

As for C++, std::unique_ptr has no ref counting at all.

shared_ptr does, but that’s why you avoid it at all costs if you need to move things around. You only pay the cost when copying the shared_ptr itself, but you almost never need a shared_ptr, and even when you do, you can avoid copying in the hot path.

> Modern gcs can be pauseless, but either way you’re spending CPU on gc and not servicing requests/customers.

Since memory is finite and all computation uses some, every program spends CPU on memory management regardless of technique. Tracing GCs often spend less CPU on memory management than low-level languages.

> std::unique_ptr has no ref counting at all.

It still needs to do work to free the memory. Tracing GCs don't. The whole point of tracing GCs is that they spend work on keeping objects alive, not on freeing memory. As the size of the working set is pretty much constant for a given program and the frequency of GC is the ratio of allocation rate (also constant) to heap size, you can arbitrarily reduce the amount of CPU spent on memory management by increasing the heap.

I honestly doubt any of the frameworks in that benchmark are using virtual threads yet. The top one is still using Vert.x, which is an event loop on native platform threads.

What matters is if it is good enough for project acceptance criteria.

> It seems to me that async io struggles whenever people try it.

Promises work great in javascript, either in the browser or in node/bun. They're easy to use, and easy to reason about (once you understand them). And the language has plenty of features for using them in lots of ways - for example, Promise.all(), "for await" loops, async generators and so on. I love this stuff. It's fast and simple to use.

Personally I've always thought the "function coloring problem" was overstated. I'm happy to have some codepaths which are async and some which aren't. Mixing sync and async code willy nilly is a code smell.

Personally I'd be happy to see more explicit effects (function colors) in my languages. For example, I'd like to be able to mark which functions can't panic. Or effects for non-divergence, or capability safety, and so on.

Promises in JS are particularly easy because JS is single-threaded. You can be certain that your execution flow won't be preempted at an arbitrary point. This greatly reduces the need for locks, atomics, etc.

Also task-local variables, which almost all systems other than C-level threads basically give up on despite being widely demanded.

.NET has had task-local vars for about a decade now: https://learn.microsoft.com/en-us/dotnet/api/system.threadin...

Python added them in 3.7: https://docs.python.org/3/library/contextvars.html

I'll admit to unfamiliarity with the .NET version, but for Python even `threading.local` is a useless implementation if you care at all about performance.

Performant thread-local variables require ahead-of-time mapping to a 1-or-2-level integer sequence, with a register to quickly locate the base array and some kind of trap to handle the "not allocated" case. Task-local variables are worse than thread-locals, since they are swapped out much more frequently.

This requires special compiler support, not being a mere library.

I would argue that if you're using Python, you already don't care about performance (unless it's just a little glue between other things).

In .NET they do virtual dispatch via a very basic map-like interface that has a bunch of micro-optimized implementations that are swapped in and out as needed if new items are added. For N up to 4 variables, they use a dedicated implementation that stores them as fields and does simple branching to access the right one, for each N. Beyond that it becomes an array, and at some point, a proper Dictionary. I don't know the exact perf characteristics, but FWIW I don't recall that ever being a source of an actual, non-hypothetical perf problem. Usually you'll have one local that is an object with a bunch of fields, so you only need one lookup to fetch that, and from there it's as fast as field access.

> Promises work great in javascript, either in the browser or in node/bun.

I can't disagree more. They suffer from the same stuff rust async does: they mess with the stack trace and obscure the actual guarantees of the function you're calling (eg a function returning a promise can still block, or the promise might never resolve at all).

Personally I think all solutions will come with tradeoffs; you can simply learn them well enough to be productive anyway. But you don't need language-level support for that.

> I can't disagree more. They suffer from the same stuff rust async does: they mess with the stack trace and obscure the actual guarantees of the function you're calling (eg a function returning a promise can still block, or the promise might never resolve at all).

These are inconveniences, but not show stoppers. Modern JS engines can "see through" async call stacks. Yes, bugs can result in programs that hang - but that's true in synchronous code too.

But async in rust is way worse:

- Compilation times are horrible. An async hello world in javascript starts instantly. In rust I need to compile and link to tokio or something. Takes ages.

- Rust doesn't have async iterators or async generators. (Or generators in any form.) Rust has no built in way to create or use async streams.

- Rust has 2 different ways to implement futures: the async keyword and impl Future. You need to learn both, because some code is impossible to write with the async keyword, and some code is impossible to write with impl Future. It's incredibly confusing and complicated, and it's difficult to learn properly.

- Rust doesn't have a built in run loop ("executor"). So - best case - your project pulls in tokio or something, which is an entire kitchen sink and all your dependencies use that. Worst case, the libraries you want to use are written for different async executors and ??? shoot me. In JS, everything just works out of the box.

I love rust. But async rust makes async javascript seem simple and beautiful.

I stand by my assessment. You seem to see javascript as better simply because the tradeoffs are easier to internalize, in part because it can't (and doesn't try to) tackle the generalizations of async code that rust does.

> Modern JS engines can "see through" async call stacks.

I did not know that. I'll have to figure out how this works and what it looks like.

> Rust doesn't have async iterators or async generators. (Or generators in any form.) Rust has no built in way to create or use async streams.

This is not necessary. Library-level streams work just fine. Perhaps a "yield" keyword and associated compiler/runtime support would simplify this code, but this is not really a restriction for people willing to learn the libraries.

Rust has many issues, and so does its async keyword, but javascript is only obviously better if you want the tradeoffs javascript offers: an implicit and unchangeable async runtime that doesn't offer parallelism and relies on a JIT interpreter. If you have CPU-bound code, or you want to ship a statically compiled binary (or an embeddable library), this is not a good set of tradeoffs.

I find rust's tradeoffs to be worth the benefits—I literally do not care about compilation time, and I internalized the type constraints many years ago—and I find the pain of javascript's runtime constraints not worth its simplicity or "beauty", although I admit I simply do not view code aesthetically. Perhaps we just prefer to tackle differently-shaped problems.

> javascript is only obviously better if you want the tradeoffs javascript offers: an implicit and unchangeable async runtime that doesn't offer parallelism and relies on a JIT interpreter.

Yes - I certainly wouldn’t use JavaScript to compile and ship binaries to end users. But as an application developer, I think the tradeoffs it makes are pretty great. I want fast iteration (check!). I want all libraries in the ecosystem to just work and interoperate out of the box (check!). And I want to be able to just express my software using futures without worrying I’m holding them wrong.

Even in systems software I don’t know if I want to be picking my own future executor. It’s like, the string type in basically every language is part of the standard library because it makes interoperability easy. I wish future executors in rust were in std for the same reason - so we could stop arguing about it and just get back to writing code.

> And I want to be able to just express my software using futures without worrying I’m holding them wrong.

Well, there you go: you just happen to want to build stuff that javascript is good for. If you wanted to express different software you'd prefer a different language. But not everyone wants to write io-bound web services.

> I did not know that. I'll have to figure out how this works and what it looks like.

They basically stitch together a dummy async stack based on the causality chain. It's not really a stack anymore, since you can have a bunch of tasks interleaved on it, which has to be shown somehow, but it's still nice.

It's also not JS specific. .NET has the same async model (despite also having multithreaded concurrency), and it also has similar debugger support. Not just linearized async stacks, but also the ability to diagram them etc.

https://learn.microsoft.com/en-us/visualstudio/debugger/walk...

And in the profiler as well, not just the debugger. So it's entirely a tooling issue, and part of the problem is that the JS ecosystem has been lagging behind on this.

Aren't streams async iterators?

generators, at least, are available on nightly.

Yeah, generators have been available on nightly for 8 years or something. They're clearly stable enough that async is built on top of the generator infrastructure within the compiler.

But I haven’t heard anything about them ever moving to stable. Here’s to another 8 years!

Project Loom makes Java in particular really nice, virtual threads can "block" without blocking the underlying OS thread. No callbacks at all, and you can even use Structured Concurrency to implement all sorts of Go- and Erlang-like patterns.
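As an illustrative sketch of the Go-like pattern mentioned here (names are mine; Structured Concurrency proper is still a preview API, so this uses only stable JDK 21+ constructs): a bounded BlockingQueue plays the role of a channel between virtual threads, and the blocking put()/take() calls park the virtual thread rather than the underlying OS thread.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Go-style producer/consumer on virtual threads: a bounded queue acts
// as a channel; blocking put()/take() park only the virtual thread.
public class ChannelSketch {
    static final int POISON = -1; // sentinel marking end-of-stream

    public static int sumViaChannel(int n) throws InterruptedException {
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(16);
        Thread producer = Thread.ofVirtual().start(() -> {
            try {
                for (int i = 1; i <= n; i++) channel.put(i); // blocks when full
                channel.put(POISON);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        int sum = 0;
        for (int v; (v = channel.take()) != POISON; ) sum += v; // blocks when empty
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumViaChannel(100)); // 5050
    }
}
```

No callbacks appear anywhere; backpressure falls out of the bounded queue, which is much the same shape as a buffered Go channel.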

(I use it from Clojure, where it pairs great with the "thread" version of core.async (i.e. Go-style) channels.)

> but for ordinary work you're better off using gunicorn

I'd like to see some evidence for this. Other than simplicity, IMO there's very little reason to use synchronous Python for a web server these days. Streaming files, websockets, etc. are all areas where asyncio is almost a necessity (in the past you might have used twisted), to say nothing of the performance advantage for typical CRUD workloads. The developer ergonomics are also much better if you have to talk to multiple downstream services or perform actions outside of the request context. Needing to manage a thread pool for this or defer to a system like Celery is a ton more code (and infrastructure, typically).

> async i/o solutions are all single threaded

And your typical gunicorn web server is single threaded as well. Yes you can spin up more workers (processes), but you can also do that with an asgi server and get significantly higher performance per process / for the same memory footprint. You can even use uvicorn as a gunicorn worker type and continue to use it as your process supervisor, though if you're using something like Kubernetes that's not really necessary.

Maybe he meant gevent? Which is better than async io in python.

Agree to disagree. Monkey patching the stdlib is a terrible hack and having to debug non-trivial gevent apps is a nightmare (not that asyncio apps are perfect either).

Not many use cases actually need websockets. We're still building new shit in sync python and avoiding the complexity of all the other bullshit

If you watched the video closely, you'll have noticed that this design parameterizes the code by an `io` interface, which enables pluggable implementations. Correctly written code in this style can work transparently with evented or threaded runtimes.

Really? Ordinary synchronous code calls an I/O routine which returns. Asynchronous code calls an I/O routine and then gets called back. That’s a fundamental difference, and you can only square it by making the synchronous code look like asynchronous code (the callback gets called right away) or the asynchronous code look like synchronous code (something like async Python, which breaks up a subroutine into multiple subroutines and has an event loop manage who calls whom).

OP didn't say that code looks like ordinary sync code, only that it's possible to write code that works equally well for both sync and async. If you RTFA, it looks like this:

  var a = io.async(doWork, .{ io, "hard" });
  ...
  a.await(io);

If your `io` is async, this behaves like an async call returning a promise and then awaiting said promise. If `io` is sync, then `io.async` makes the call immediately and `await` is a no-op.
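The same "parameterize by io" idea can be sketched in other languages too. Here's a rough Java analogue (entirely hypothetical names, not the article's API): the business logic is written once against a tiny interface, and either an eager (sync) or a threaded implementation is plugged in.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Pluggable sync/async runtime: doWork() is written once against Io.
// BlockingIo runs tasks eagerly, so "await" (Supplier.get) is a no-op
// returning a stored value; ThreadedIo defers to another thread, so
// "await" blocks until the result is ready.
public class PluggableIo {
    interface Io {
        <T> Supplier<T> async(Supplier<T> task); // returns an awaitable handle
    }

    static final class BlockingIo implements Io {
        public <T> Supplier<T> async(Supplier<T> task) {
            T result = task.get();   // run immediately, right here
            return () -> result;     // "await" just hands the value back
        }
    }

    static final class ThreadedIo implements Io {
        public <T> Supplier<T> async(Supplier<T> task) {
            CompletableFuture<T> f = CompletableFuture.supplyAsync(task);
            return f::join;          // "await" blocks until completion
        }
    }

    // The same code path works transparently under either runtime.
    static int doWork(Io io) {
        Supplier<Integer> a = io.async(() -> 20);
        Supplier<Integer> b = io.async(() -> 22);
        return a.get() + b.get();
    }

    public static void main(String[] args) {
        System.out.println(doWork(new BlockingIo()));  // 42
        System.out.println(doWork(new ThreadedIo()));  // 42
    }
}
```

The caveat from the surrounding discussion still applies: the code is runtime-agnostic, but the sync implementation only behaves identically if the logic never relies on two "in-flight" tasks making progress concurrently.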

I know that it depends on how much you disentangle your network code from your business logic. The question is the degree. Is it enough, or does it just dull the pain?

If you give your business logic the complete message or send it a stream, then the flow of ownership stays much cleaner. And the unit tests stay substantially easier to write and more importantly, to maintain.

I know too many devs who don't see when they bias their decisions to avoid making changes that will conflict with bad unit tests, and who then declare that our testing strategy is Just Fine. It's easier to show than to debate, but it still takes an open mind to accept the demonstration.

I haven’t actually seen it in the wild yet, just talked about in technical talks from engineers at different studios, but I’m interested in designs in which there isn’t a traditional main thread anymore.

Instead, everything is a job, and even what would be the main thread is no longer an orchestration thread but just another worker: after some nominal setup it scaffolds enough threads, usually the core count minus one, to all serve as lockless, work-stealing worker threads.

Conventional async programming relies too heavily on a critical main thread.

I think it’s been so successful though, that unfortunately we’ll be stuck with it for much longer than some of us will want.

It reminds me of how many years of inefficient programming we have been stuck with because cache-unfriendly traditional object-oriented programming was so successful.

This sounds like running an event loop per thread instead of 1 event loop with a backing thread pool. Or am I misunderstanding you?

It works great for small tasks, but larger tasks block local events and you can get weird latency issues; that was the major tradeoff I ran into when I used it. If your tasks are tiny, not having the event-loop handoff to a worker thread is a good throughput boost. But then we introduced larger tasks, which would hang the local event loop from servicing those events, and we started having latency issues.

I think ScyllaDB works somewhat like this, but it does message passing to put certain data on certain threads: any thread can handle incoming events, but it still moves the request to the pinned thread the data lives on. One thread can get overwhelmed if your data isn't well distributed.