Go has tricks that you can't replicate elsewhere, such as infinitely growable stacks, which are only possible thanks to the garbage collector. But I did enjoy working on this; I'm continually impressed with how Zig makes such nice, high-level-looking APIs possible in such a low-level language.

Also, it is about time to let go of GC-phobia.

https://www.withsecure.com/en/solutions/innovative-security-...

https://www.ptc.com/en/products/developer-tools/perc

Note the

> This video illustrates the use case of Perc within the Aegis Combat System, a digital command and control system capable of identifying and tracking incoming threats and providing the war fighter with a solution to address threats. Aegis, developed by Lockheed Martin, is critical to the operation of the DDG-51, and Lockheed Martin has selected Perc as the operating platform for Aegis to address real-time requirements and response times.

Not all GCs are born alike.

> Not all GCs are born alike.

True. However, in the bounded-time GC space, few projects share the same definitions of low-latency or real-time. So you have to find a language that meets all of your other desiderata and provides a GC that meets your timing requirements. Perc looks interesting; Metronome made similar promises about sub-ms latency. But I'd have to get over my JVM runtime phobia.

I consider one where human lives depend on it, for better or worse depending on the side, real-time enough.

Human lives often depend on processes that can afford to be quite slow. You can have a real time system requiring only sub-hour latency; the "realness" of a real-time deadline is quite distinct from the duration of that deadline.

I don’t have an issue with garbage collectors. Most code I write is GC’d.

The thing that actually convinced me to learn Rust was for something that I wanted to use less memory; my initial Clojure version, compiled with GraalVM, hovered around 100 megs. When I rewrote it in Rust, it hovered around 500kb.

It’s not completely apples to apples, and the laptop running this code has a ton of RAM anyway, but it’s still kind of awesome to see a 200x reduction in memory without significantly more complicated code.

A lot of the stuff I have to do in Rust for GC-less memory safety ends up being stuff I would have to do anyway in a GC’d language, e.g. making sure that one thread owns the memory after it has been transferred over a channel.

That GC introduces latencies of ~1000µs. The article is about eliminating ~10µs context switching latencies. Completely different performance class. The "GC-phobia" is warranted if you care about software performance, throughput, and scalability.

DoD uses languages like Java in applications where raw throughput and low-latency is not critical to success. A lot of what AEGIS does is not particularly performance sensitive.

GC is fine, what scares me is using j*va in Aegis..

The OutOfMemoryError will happen after the rocket hits the target.

Real-time GCs can only guarantee a certain number of deallocations per second. Even with a very well-designed GC, there's no free lunch. A system which manages its memory explicitly will not need to risk overloading its GC.

I think you have that backwards; they can only guarantee a certain number of allocations per second (once the application hits steady state the two are the same, but there are times when it matters).

Pre-1.0 Rust used to have infinitely growing stacks, but they abandoned it due to (among other things) performance reasons (IIRC the stacks were not collected with Rust's GC[1], but rather on return; the deepest function calls may happen in tight loops, and if you are allocating and freeing the stack in a tight loop, oops!)

1: Yes, pre-1.0 Rust had a garbage collector.

Rust still has garbage collection if you use Arc and Rc: not a tracing garbage collector, but reference counting, which is a form of garbage collection.

I'm going to veer into no-true-Scotsman territory for a bit and claim that those don't count, since they cannot collect cycles (if I'm wrong and they implement e.g. trial deletion, let me know). This isn't just academic, since cyclic data structures are an important place where the borrow checker can't help you, so a GC would be useful.

You mean Drop, which is entirely predictable and controlled by the user?

You mean Go's segmented stacks? You can literally use them in C and C++ with GCC and glibc. They were implemented to support gccgo, but they work for other languages as well.

It is an ABI change, though, so you need to recompile the whole stack (there might be some support for segmented code calling non-segmented code, but I don't remember the extent of it), and it is probably half deprecated now. But it works, and it doesn't need a GC.

No, Go abandoned segmented stacks a long time ago. They caused unpredictable performance, because you could hit an alloc/free cycle somewhere deep in the code. What they do now is: when execution hits the stack guard, they allocate a new stack (2x the size), copy the data over, and update pointers. Shrinking happens during GC.

I think by now we can expect gccgo to eventually join gcj.

The Fortran, Modula-2, and ALGOL 68 frontends are getting much more development work than gccgo, which is stuck on pre-generics Go (version 1.18, from 2022); no one is working on it beyond minor bug fixes.