Hey Ron, I’ve got deep respect for what you do and appreciate what you’re sharing; that’s definitely good to know. And I understand that many people take any benchmark as validation of their beliefs. There are so many parameters that are glossed over at best. More interesting to me is the total cost of bringing that performance to production. If it’s some gibberish that takes a team of five a month to formulate, then costs extra CPU and RAM to execute, and then becomes another Perlesque incantation that no one can maintain, it’s not really a “typical” thing worth considering, except where it’s necessary, scoped to a dedicated library, and the budget permits.

I don’t touch Quarkus anymore for a variety of reasons. Yes, sometimes Quarkus is ahead, sometimes Vert.x; from what I remember, it’s usually bare Vert.x. It boils down to the benchmark iteration and the runtime environment. In one gRPC benchmark, Akka took the crown in a multicore scenario, but at the cost of two orders of magnitude more RAM and noticeably more CPU. Those are plausible baselines for a trivial payload.

By “Netty avoiding the JVM” I was referring mostly to its off-heap memory management, not only to the JDK APIs you guys deprecated.
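To make that concrete, here’s roughly what I mean (a minimal sketch against netty-buffer; the class name and sizes are just for illustration): Netty’s pooled allocator hands out reference-counted direct buffers whose payload bytes live outside the GC-managed heap.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import java.nio.charset.StandardCharsets;

public class OffHeapExample {
    public static void main(String[] args) {
        // Ask the pooled allocator for a direct (off-heap) buffer:
        // the payload bytes sit in native memory, outside the Java heap.
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(256);
        try {
            buf.writeBytes("hello".getBytes(StandardCharsets.UTF_8));
            System.out.println("direct: " + buf.isDirect()
                    + ", readable bytes: " + buf.readableBytes());
        } finally {
            // Reference counting is manual; forgetting release() leaks native memory.
            buf.release();
        }
    }
}
```

That manual, pooled lifecycle is what keeps the hot data path largely out of the collector’s hands, which is most of what I meant by “avoiding the JVM”.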

I’m deeply invested in the Java world, but your internal benchmarks rarely translate well to my day-to-day observations. So I’m quite often a bit perplexed when I read your comments here and elsewhere or watch your talks. Without pretending to understand the JVM at a level comparable to yours, in my typical scenarios I do quite often manage to get close to the throughput of my Rust and C++ implementations, albeit at a much higher CPU and memory cost. Achieving low latency and high throughput at once is a different story, though. I genuinely hope that one day Java will become a platform for more performance-oriented workloads, with less nondeterminism. I really appreciate your efforts toward introducing more consistency into the JDK.