Not to disagree, but to amplify - FWIW, most of what you say was also the sales pitch for C++ over ANSI C in the early 90s, in contrast to the "pure Java" mentality that shortly followed in the late 90s (with a megaton of Sun Microsystems marketing to rewrite almost everything rather than bridge with JNI). People neglect how practical incrementalism can be.
Also, FWIW, it is very ergonomic for Nim to call C (though the reverse is made complex by GC'd types). { I believe similar can be said for other PLangs you mention, but I am not as sure. } It's barely an inconvenience. Parts of Nim's stdlib still use libc and many PLangs do that for at least system calls. You can also just convert C to Nim with the c2nim program, though usually that requires a lot of hand editing afterwards.
Maybe they should write a C++2carbon translator tool? That would speed things up for them. Maybe they already have and I just haven't heard of it? I mean the article does say "some level of source-to-source translation", but I couldn't find details/caveats poking around for a few minutes.
Nim 2 doesn't require a GC if you use arc/atomicArc. The only thing you really need to be careful about is when you use ref types or custom owning types. Otherwise, manual memory management can be done in Nim pretty easily.
Hypothetically you could `importcpp` functions, classes, etc. when compiling with `nim cpp`.
We have zero disagreement here (actually true of all responses to my comment - an odd circumstance on HN). What you call "`ref` types" is what I meant by "GC'd types". I actually like that the Nim compiler changed from `--gc=X` to `--mm=X` a while back as the key distinction is (& has always been) "automatic vs. manual".
Elaborating on this cross-talk, any academic taxonomy says reference counting is a kind of GC. { See the subtitle or table of contents of Jones 1996 "Garbage Collection: Algorithms for Automatic Dynamic Memory Management", for example. } Maybe you & I (or Nim's --mm?) can personally get the abbreviation "AMM" to catch on? I doubt it, but we can hope!! :) Sometimes I think I should try more. Other times I give up.
Before the late 90s, people would say "tracing GC" or "reference counting GC", with plain "GC" for the general idea. But somehow early JavaVM GCs (and their imitators) were so annoying to so many that "the GC" came to refer, not to the abstract idea of AMM, but to the specific, concrete, separate tracing-GC thread(s). It's a bit like if "hash table" had come to mean only the separately chained linked-list variant, because that's what you need for delete-in-the-middle-of-iterating as the C++ STL wants, and then only the specific STL realization to boot { only luckily that didn't happen }.
Open-addressed hash tables basically didn't exist for a long time. The various strategies for collision handling in these tables are from the 1980s or later, and if you don't have a collision strategy you can't use this as a general-purpose container. I'm pretty sure I never used a hash table that didn't use separate chaining until at least the 1990s, and perhaps later.
So that's maybe a bad example. In the same way, I think it's fine that "structured programming" is about the need for structured control flow, not the much later idea of structured concurrency, even though today you might say they both have equal claim to the word "structured".
In contrast it is weird that people decided somehow "Object oriented" means the features Java has, rather than most of what OO was actually about when it was invented. I instinctively want to blame Bjarne Stroustrup but can't think of any evidence.
I can't speak to which libraries you used, but both techniques have been in broad, common use since the 1950s. According to the "History" subsection at the very end of 6.4 in Knuth's TAOCP v3 (whose very first edition, written in 1972, already covered OA with various probe/collision strategy ideas), both open addressing & separate chaining were co-invented at IBM in 1953/54 by Luhn & Amdahl. You may be confusing the Celis 1985 Robin Hood hashing work with just the open-addressing part? Anyway, as you say, it may be a strained example/analogy. "OO" & "concurrency" both have a lot going on, too.
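To make the two collision strategies being discussed concrete, here is a toy sketch in Python (the class names are mine, the open-addressed table uses plain linear probing, and real implementations would also handle resizing and deletion):

```python
class ChainedTable:
    """Separate chaining: each slot holds a list (the 'chain') of entries."""
    def __init__(self, capacity=8):
        self.slots = [[] for _ in range(capacity)]

    def put(self, key, value):
        chain = self.slots[hash(key) % len(self.slots)]
        for i, (k, _) in enumerate(chain):
            if k == key:                  # key already present: overwrite
                chain[i] = (key, value)
                return
        chain.append((key, value))        # collisions just grow the chain

    def get(self, key):
        for k, v in self.slots[hash(key) % len(self.slots)]:
            if k == key:
                return v
        raise KeyError(key)


class OpenTable:
    """Open addressing: on collision, probe the next slot (linear probing)."""
    def __init__(self, capacity=8):
        self.slots = [None] * capacity    # each slot is None or (key, value)

    def put(self, key, value):
        i = hash(key) % len(self.slots)
        for _ in range(len(self.slots)):
            if self.slots[i] is None or self.slots[i][0] == key:
                self.slots[i] = (key, value)
                return
            i = (i + 1) % len(self.slots)  # probe the next slot
        raise RuntimeError("table full")

    def get(self, key):
        i = hash(key) % len(self.slots)
        for _ in range(len(self.slots)):
            if self.slots[i] is None:      # empty slot ends the probe chain
                break
            if self.slots[i][0] == key:
                return self.slots[i][1]
            i = (i + 1) % len(self.slots)
        raise KeyError(key)


for T in (ChainedTable, OpenTable):
    t = T()
    for k in ("a", "b", "c"):
        t.put(k, k.upper())
    print(T.__name__, t.get("b"))  # -> ChainedTable B / OpenTable B
```

The point of the contrast: chaining stores colliding entries out-of-table in per-slot lists, while open addressing keeps everything in the flat array and resolves collisions by probing, which is exactly the distinction the Luhn/Amdahl history is about.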
Anyway, like the "major" modes of hash collision resolution, reference-counting GC has been around concurrently (haha) with tracing GC since the dawn of modern computing. Unix hard links (& other things) codify ref counting into filesystems. Python has always had ref-counted GC, older Lisps focused more on tracing GC, etc., etc. Popularity measures are notoriously difficult.
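The CPython case is easy to observe directly: `sys.getrefcount` reports an object's current reference count, and counts change immediately as references are created and dropped, with no separate tracing pass (a minimal sketch; the absolute counts are interpreter internals, so only the differences are meaningful):

```python
import sys

obj = object()
base = sys.getrefcount(obj)        # includes the temporary ref from the call itself

alias = obj                        # binding another name bumps the count by one
print(sys.getrefcount(obj) - base)  # -> 1

del alias                          # dropping the name decrements it immediately
print(sys.getrefcount(obj) - base)  # -> 0
```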
Mostly people like to abbreviate { like having a search $PATH instead of using /bin/foo everywhere }. The whole point of abstraction is to neglect details. Neglect naturally leads to forgetting (or never learning/knowing). Ignorance leads people to cross-talk (or worse willfully misinterpret/project). Cross-talk leads to suffering. Yoda out. ;-)
EDIT: Also, speaking of abbreviation & clarity: in Nim, "arc" has (at least until this writing) always stood for Automatic Reference Counting, not Atomic Reference Counting, which is the more Rust-y terminology and is vaguely suggested by the "arc/atomicArc" from @miguel_martin, to whom I originally replied. In Nim 3 it seems "arc" may become both Automatic & Atomic, though probably without changing the abbreviation to "AARC".
None of the schemes I was aware of pre-date Celis in 1985, but it's apparent upon actually reading Celis [which I hadn't done previously] that he was only improving on an existing state of the art. That state of the art was pretty dire by my reckoning, but it clearly would have worked, so I was entirely wrong, and I apologise for being so assertive when in fact I didn't know what I was talking about.
Source-to-source translation is definitely planned. We've even done some early experiments.
But we need to get the language and interop into good shape to be able to thoroughly test and evaluate the migration.
I see. So it's just a slide-ware bullet point right now? It would be helpful to really emphasize a word like "planned" in that bullet. It may have been lifted from some future-oriented planning list, and as it stands the material makes it seem like it's actually available.
I am trying to run Carbon in Godbolt.
Printing, as in the example from Carbon's GitHub repository, does not work: `Print("Test");` gives a complaint about not finding `Print`.
That is correct. Strings and I/O both have a bunch of work to be done. Printing currently requires workarounds like https://godbolt.org/z/MP4164f7s
Is there a compiler, maybe an online one, for Carbon, or some way to compile and run Carbon code? And if not, what are the plans for that?
The nightly release of the Carbon compiler can be used via https://carbon.compiler-explorer.com/ . Note that it is definitely a work in progress; it hasn't even reached our v0.1 goals yet, but a good chunk of important functionality is working.
FWIW, many PLs find themselves needing a C FFI if they want to support macOS. It's not just a convenience thing.
But couldn't one argue that's true of most languages? They promise incremental progress toward rewriting your behemoth into miniature monoliths. I think the only one that clearly drew the line at being able to pull in headers is .NET; you just can't do it. With others, like Go or Rust, you can point at the C headers and bam…
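Even interpreted languages make this easy: Python's stdlib `ctypes` module can call straight into libc with no headers at all (a minimal sketch, assuming a Unix-like system where `ctypes.CDLL(None)` exposes the symbols of the running process, including libc):

```python
import ctypes

# Load symbols from the running process; on Linux/macOS this includes libc.
libc = ctypes.CDLL(None)

# Declare the C signature of abs(int) so ctypes marshals arguments correctly.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # -> 42
```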
Honestly, while I find the syntax terse, I welcome more low-level languages able to push performance.