Ya, that seems to be a misunderstanding. "Industry purposes" covers a huge range of stuff. Go is pretty good for systems programming where Java isn't really an option due to the fundamental limits imposed by garbage collection and lack of pointers. Java is pretty good for higher-level application development where occasional GC pauses are tolerable (the GC pauses are rare and fast now, but they still rule out using Java for certain purposes).

Are you sure Go's garbage collector doesn't have pauses? AFAIK its pauses are worse than modern Java's garbage collectors' [1].

I'm not sure it's even better than Java's, especially compared to modern ZGC (and you can choose your GC in Java). It's definitely less configurable. I would say most online comments about Java's GC are long outdated.

For example, in web servers a lot of the work is request-response, so it's convenient to use a generational GC (each request's data fits into the "young" generation). Similar to arenas, but without code changes. Go's GC is not generational, though, so you have to care about allocations.
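To make the pattern concrete, here's a minimal sketch (hypothetical handler, not from any real codebase) of request-scoped allocation in Java:

```java
import java.util.ArrayList;
import java.util.List;

public class RequestDemo {
    // Builds a response from short-lived intermediate objects.
    static String handleRequest(String payload) {
        List<String> parts = new ArrayList<>();      // dies young
        for (String field : payload.split(",")) {    // temporary strings die young
            parts.add(field.trim().toUpperCase());
        }
        return String.join("|", parts);              // only the response escapes
    }

    public static void main(String[] args) {
        // After each call, every intermediate allocation is unreachable.
        System.out.println(handleRequest(" a, b ,c")); // A|B|C
    }
}
```

Under a generational collector, those per-request temporaries never survive a young collection, so reclaiming them is close to free; that's the "arenas without code changes" effect.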

https://codemia.io/blog/path/The-Evolution-of-Garbage-Collec...

> I would say most of online comments about Java's GC are long outdated.

They are not. Feel free to look up literally any half-decent benchmark. If Java is on par with or better than any other language of note, check the memory usage. It's standard for Java to use 5-20x the memory for about the same performance. The memory floor also seems to be in the hundreds of megabytes. Ridiculous.

> For example, in web servers a lot of work is request-response, so it's convenient to utilize generational GCs (so that each request data would fit into "young" generation).

No, that's a job for arenas. Generational GCs are mostly a result of Java's limitations, not a universally good approach.

> Go's GC is not generational though, so you should care about allocations.

You should always care about allocations. In fact, the idea that you shouldn't is a big part of Java's poor performance and design.

No one ever claimed that Java didn't use a lot of memory. The "comments about Java's GC" used to be mainly about pauses. Java programmers don't claim that the JVM is conservative with memory use. That said, 5-20x... nah. Maybe for a toy "hello world"-sized program, but not in real usage.

Java's GCs (plural) are hands down better.

Hell, I even had a use case where serial GC was actually the correct choice (small job runner process that needed to be extremely low memory overhead). It's nice having options, and most of those options are extremely good for the use cases they were designed for.
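For what it's worth, that kind of low-overhead setup is just a couple of flags (class name and heap size here are illustrative):

```
java -XX:+UseSerialGC -Xmx16m SmallJobRunner
```

Serial GC runs no concurrent GC threads and has the smallest runtime footprint of the HotSpot collectors, which is exactly what you want for a tiny, short-lived process.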

Ok, which one do I choose then, with what configuration? How much time do I need to spend on this research?

How do I verify that they are actually better? Is the overall performance of my program better? Because that's what I care about. I of course do include memory usage in "performance".

Do you need extra-low latency, even at very high percentiles, and are you willing to give away a bit of throughput for it? ZGC.

In almost every other case: G1 (the default; just don't add any flags).

Do you want to use less memory at the price of some throughput? Decrease the heap size; otherwise don't add anything.

That's it, in most cases just use the default.
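Spelled out as command lines (class name and heap size are illustrative):

```
java -XX:+UseZGC MyServer      # extra-low latency, small throughput cost
java MyServer                  # G1 is the default: no flags needed
java -Xmx512m MyServer         # smaller heap: less memory, some throughput cost
```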

GC pauses on modern JVMs are < 1 ms (ZGC & Shenandoah).

Go has a GC too, and arguably a worse one than Java's.

Yeah, but I do like not having to give Go several flags to do something reasonable with its memory.

The "reasonable" thing Go does is pause the threads doing the actual work of your program if it decides they're creating more garbage than it can keep up with, severely limiting throughput.

I think this is a misunderstanding. If the program out-paces the GC because the GC guessed the trigger point wrong, something has to give.

In Go, what gives is goroutines have to use some of their time slice to assist the GC and pay down their allocations.

In Java, I believe what you used to get was called "concurrent mode failure" which was somewhat notorious, since it would just stop the world to complete the mark phase. I don't know how this has changed. Poking around a little bit it seems like something similar in ZGC is called "allocation failure"?

The GC assist approach adopted by Go was inspired by real-time GC techniques from the literature and in practice it works nicely. It's not perfect of course, but it's worked just fine for lots of programs. From a purely philosophical point of view, I think it results in a more graceful degradation under unexpectedly high allocation pressure than stopping the world, but what happens in practice is much more situational and relies on good heuristics in the implementation.

A lot of the answer is that if you can do more work while generating less garbage (a lower allocation rate), this problem basically solves itself. Every "high-performance GC language" other than Java allows for "value types"/"structs", which enable a far lower allocation rate and put much less pressure on the GC.
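As a sketch of the workaround Java programmers reach for today (hypothetical example): pack "struct"-like data into primitive arrays so there is no per-element object at all:

```java
public class Points {
    // Struct-of-arrays layout: zero per-point object allocations.
    final double[] xs;
    final double[] ys;

    Points(int n) { xs = new double[n]; ys = new double[n]; }

    void set(int i, double x, double y) { xs[i] = x; ys[i] = y; }

    double sumX() {
        double s = 0;
        for (double x : xs) s += x;   // no boxing, no garbage
        return s;
    }

    public static void main(String[] args) {
        Points p = new Points(3);
        p.set(0, 1.0, 2.0);
        p.set(1, 3.0, 4.0);
        p.set(2, 5.0, 6.0);
        System.out.println(p.sumX()); // 9.0
    }
}
```

Value types ("structs") give you this density without contorting the code: an array of value-typed points would be laid out flat, instead of being an array of pointers to heap objects.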

How much lower an allocation rate? Value types are important, and fortunately they are coming to Java as well. But they don't decrease allocation rates nearly enough in every kind of software. They may be a necessity in games, certain numeric computations, or low-latency trading, but for a typical web server they don't matter all that much - people use identity-having objects in value-typed languages the same way there. Especially since, with thread-local allocation buffers in Java, single-use object allocations are not particularly expensive to begin with: live objects are evacuated and then the whole buffer is reset.

So unless you claim that there is no software in Go/C# where the GC is the bottleneck, no, the problem absolutely doesn't solve itself.

And yet Java outruns pretty much all of them, because it doesn't actually allocate everything on the heap all of the time. And you've been able to declare and work with larger structures in raw memory for ... 20 years? You mostly don't need to, but sometimes you want to win benchmark wars.

And of course it's getting value types now, so it'll win there too. As well as vectors.
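The "raw memory" route the parent alludes to is, for instance, java.nio.ByteBuffer (direct buffers have been available since Java 1.4, circa 2002). A sketch with an illustrative fixed-size record layout:

```java
import java.nio.ByteBuffer;

public class OffHeapRecord {
    static final int RECORD_SIZE = 12;   // int id (4 bytes) + double price (8 bytes)

    static void write(ByteBuffer buf, int slot, int id, double price) {
        int base = slot * RECORD_SIZE;
        buf.putInt(base, id);
        buf.putDouble(base + 4, price);
    }

    static double readPrice(ByteBuffer buf, int slot) {
        return buf.getDouble(slot * RECORD_SIZE + 4);
    }

    public static void main(String[] args) {
        // 1000 records in off-heap memory: zero per-record objects for the GC to trace.
        ByteBuffer buf = ByteBuffer.allocateDirect(1000 * RECORD_SIZE);
        write(buf, 0, 42, 99.5);
        System.out.println(readPrice(buf, 0)); // 99.5
    }
}
```

It's clunkier than a real struct, which is the point of the value-types work, but it's been a standard benchmark-war trick for a long time.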

Post benchmarks. No, ones where you use 20x the memory of Rust to do the same job 1% faster don't count.

You don’t have to do that with Java either.

That's a very shallow argument.

If it were shallow, it'd be easy for them to fix.

Backwards compatibility: every vim, emacs, and bash enthusiast should know about it.

It's easy for the USER to fix, since there are flags available. In the day of LLMs it's also easy to find out about those flags and what they do. And if it's so important, testing shouldn't be supremely hard, either.

Do you mean backwards compatibility for things that rely on the default settings? The defaults have already changed across Java versions and also depend on the system.

Normally when you run a non-Java binary, it uses very little memory to start, has no hard limit, and returns memory to the system when it frees things. Supposedly you can set JVM flags to do all that, but performance probably suffers; otherwise it would be the default behavior. So in practice users are always setting the flags carefully.
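For the record, flags in that direction do exist (values illustrative; defaults vary by JVM version and collector):

```
java -XX:MaxRAMPercentage=75 MyApp                           # size the heap relative to available RAM
java -XX:+UseZGC -XX:ZUncommitDelay=60 MyApp                 # ZGC uncommits idle memory (on by default)
java -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 MyApp   # shrink the heap more eagerly
```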

Go's GC is orders of magnitude behind Java's.