What you're saying may (sometimes) be true, but that's not why Java's performance is hard to beat, especially as programs evolve (I'd been programming in C and C++ since before Java even existed).

In a low-level language, you pay a higher performance cost for a more general (abstract) construct. E.g. static vs. dynamic dispatch, or the Box/Rc/Arc progression in Rust. If a certain subroutine or object requires the more general access even once, you pay the higher price almost everywhere.

In Java, the situation is the opposite: you use a more general construct, and the compiler picks an appropriate implementation per use site. E.g. dispatch is always logically dynamic, but if at a specific use site the compiler sees that the target is known, then the call will be inlined (C++ compilers sometimes do that, too, but not nearly to the same extent; that's because a JIT can perform speculative optimisations without proving they're correct); if a specific `new Integer...` doesn't escape, it will be "allocated" in a register, and if it does escape it will be allocated on the heap.
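A minimal Java sketch of that per-use-site specialisation (the class names are made up for illustration; what the JIT actually does depends on profiling data, so the comments describe what typically happens, not a guarantee):

```java
public class EscapeDemo {
    interface Shape { double area(); }

    static final class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    // The call below is logically dynamic dispatch through an interface, and
    // the Circle is logically a heap allocation. If the JIT observes that
    // Circle is the only receiver here, it can speculatively devirtualize and
    // inline area(); escape analysis can then elide the allocation entirely,
    // leaving roughly "return Math.PI". None of this is guaranteed, which is
    // exactly the trade-off described above.
    static double unitArea() {
        Shape s = new Circle(1.0);  // may never touch the heap
        return s.area();            // may compile down to a constant load
    }

    public static void main(String[] args) {
        System.out.println(unitArea());  // 3.141592653589793
    }
}
```

The key point is that the source code only ever expresses the general form; the specialisation happens per call site, invisibly.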

The problem with Java's approach is that optimisations aren't guaranteed, and sometimes an optimisation can be missed. But on average they work really well.

The problem with a low-level language is that over time, as the program evolves and features (and maintainers) are added, things tend to go in one direction: more generality. So over time, the low-level program's performance degrades and/or you have to rethink and rearchitect to get good performance back.

As to memory locality, there's no issue with Java's approach, only a missing feature: flattening objects into arrays. That feature is now being added (also in a general way: a class can declare that it doesn't depend on identity, and the compiler then transparently decides when to flatten it and when to box it).
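Until that feature lands, the locality gap is easy to see: an array of objects is an array of references, so performance-sensitive code sometimes hand-flattens into primitive arrays. A small sketch (the `Point` class and the flattened layout are illustrative, not from any particular codebase):

```java
public class FlattenDemo {
    // Today a Point[] is an array of references: each element is a separate
    // heap object, so iteration chases one pointer per element.
    static final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    static double sumX(Point[] pts) {
        double s = 0;
        for (Point p : pts) s += p.x;   // one indirection per element
        return s;
    }

    // The hand-flattened layout that value classes would make unnecessary:
    // coordinates stored inline and contiguously, so access is cache-friendly.
    static double sumXFlat(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;     // sequential, no pointer chasing
        return s;
    }

    public static void main(String[] args) {
        Point[] pts = { new Point(1, 2), new Point(3, 4) };
        System.out.println(sumX(pts) + " " + sumXFlat(new double[] {1, 3}));
    }
}
```

With flattening, the first version could get the second version's layout without changing the source.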

Anyway, this is why it's hard, even for experts, to match Java's performance without a significantly higher effort that isn't a one-time thing, but carries on (in fact, gets worse) over the software's lifetime. It can be manageable and maybe worthwhile for smaller programs, but with bigger programs the cost, the performance, or both suffer more and more as time goes on.

From my perspective, the problem with Java's approach is memory, not computation. For example, low-level languages treat types as convenient lies you can choose to ignore at your own peril. If it's more convenient to treat your objects as arrays of bytes/integers (maybe to make certain forms of serialization faster), or the other way around (maybe for direct access to data in a memory-mapped file), you can choose to do that. Java tends to make solutions like that harder.

Java's performance may be hard to beat in the same task. But with low-level languages, you can often beat it by doing something else due to having fewer constraints and more control over the environment.

> or the other way around (maybe for direct access to data in a memory-mapped file), you can choose to do that. Java tends to make solutions like that harder.

Not so much anymore, thanks to the new FFM API (https://openjdk.org/jeps/454). The verbose code you see is all compiler intrinsics, and thanks to Java's aggressive inlining, intrinsics can be wrapped and encapsulated in a clean API (i.e. if you use an intrinsic in method bar, which you call from method foo, it's usually as if you'd used the intrinsic directly in foo, even though the call to bar is virtual). So you can efficiently and safely map a data-interface type onto chunks of memory in a memory-mapped file.
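A minimal sketch of what that looks like with the FFM API (assuming JDK 22+, where JEP 454 is final; the temp-file setup and the `longAt` helper are just scaffolding for the example):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapDemo {
    // A tiny "data interface" over mapped memory: the get call below is an
    // intrinsic, and after inlining this wrapper typically costs nothing.
    static long longAt(MemorySegment seg, long index) {
        return seg.get(ValueLayout.JAVA_LONG_UNALIGNED, index * Long.BYTES);
    }

    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("mmap-demo", ".bin");
        try (Arena arena = Arena.ofConfined();
             FileChannel ch = FileChannel.open(p,
                     StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map 16 bytes of the file; the segment's lifetime is tied to the arena.
            MemorySegment seg = ch.map(FileChannel.MapMode.READ_WRITE, 0, 16, arena);
            seg.set(ValueLayout.JAVA_LONG_UNALIGNED, 0, 42L);  // write through the mapping
            System.out.println(longAt(seg, 0));                // read it back typed
        } finally {
            Files.deleteIfExists(p);
        }
    }
}
```

Bounds and lifetime are checked by the segment, so this stays safe; the checks are the kind the JIT routinely hoists out of hot loops.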

> But with low-level languages, you can often beat it by doing something else due to having fewer constraints and more control over the environment.

You can, but it's never free, rarely cheap (and the costs are paid throughout the software's lifetime), and the gains aren't all that large (on average). The question isn't "is it possible to write something faster" but "can you get sufficient gains at a justifiable cost", and that's already hard and getting harder and harder.