It doesn't show up in the online videos, but there was a huge contingent of people at that fireside chat wanting a reasonable safety story for C++. The committee simply doesn't have representation from those people and doesn't seem to understand why this is an existential risk to the language community. The delivery timelines are so long here that anything not standardized soon isn't going to arrive for a decade or more. That's all the time in the world for Rust (or even Zig) to break down the remaining barriers.

Profiles and sanitizers just aren't sufficient.

Yeah, because the committee is now made up of people who a) really love C++, and b) don't care enough about safety to use Rust instead.

I think there are plenty of people who must use C++ for legacy, management, or library reasons and who care about safety. But those people aren't going to join language committees.

“But those people aren't going to join language committees.”

This is amusingly wrong in the worst way. In the case of C++, they were there, but they left years ago when it became clear the committee didn't see this problem as existential.

And in the old days, as I keep saying, many of us (users) preferred C++ over C precisely because of the safety and stronger typing.

Yep. You either die young or live long enough to become the villain

They could also care about safety but just not like the Rust approach.

This was asked at the aforementioned chat. Andreas Weis (MISRA) responded along the lines of "You shouldn't be writing new code in C++ if you want guarantees". Might not have the identity correct, my notes aren't in front of me.

> (...) "You shouldn't be writing new code in C++ if you want guarantees".

I'm afraid context is required to actually understand what was said. It could mean anything, including very obvious things like stating that the committee is still working on proposals to provide guarantees and that they won't feature in a standard until the work is done and a new standard is published. Which would be stating the obvious.

Love this quote. And love the intent.

> Love this quote. And love the intent.

What intent do you think it has? That proposals are still being worked on and haven't been published in a specification yet?

D adds a lot to memory safety without needing to struggle with program redesigns that Rust requires.

These include:

1. bounds checked arrays (you can still use raw pointers instead if you like)

2. default initialization

3. static checks for escaping pointers

4. optional use of pure functions

5. transitive const and immutable qualifiers

6. ranges based on slices rather than pointer pairs

I think D failed to gain widespread traction for other reasons though:

1. The use of garbage collection. If you accept GC there are many other languages you can use. If you don't want GC the only realistic option was C++. Rust doesn't rely on GC.

IIRC GC in D is optional in some way, but the story always felt murky to me and that always felt like a way of weaseling out of that problem - like if I actually started writing D I'd find all the libraries needed GC anyway.

2. The awkward standard library schism.

3. Small community compared to C++. I think it probably just didn't offer enough to overcome this, whereas Rust did. Rust also had the help of backing from a large organisation.

I don't recall anyone ever mentioning its improved safety. I had a look on Algolia back through HN and most praise is about metaprogramming or it being generally more modern and sane than C++. I couldn't find a single mention of anything to do with safety or anything on your list.

Whereas Rust shouts safety from the rooftops. Arguably too much!

> I think D failed to gain widespread traction for other reasons though:

D's only selling proposition was providing C++11-style features at a point in time between C++98 and C++11 when the C++ committee was struggling to get a new standard out of the door.

Once C++11 was out, D's sales pitch was moot, and whatever wind it had in its sails was lost and never recovered.

It's interesting to note that Rust, in spite of all odds and also its community, managed to put together a far more compelling sales pitch than D.

Using D does not require a garbage collector. You can use it, or not, and you can use the GC for some allocations, and use other methods for other allocations.

D has a lot of very useful features. Memory safety features are just one aspect of it.

> The awkward standard library schism.

???

Don't underestimate the backing of a large and powerful organization.

> You can use it, or not, and you can use the GC for some allocations, and use other methods for other allocations.

Yes but people wanted a language where you can't use GC.

> ???

"Which standard library should I use?" is not a question most languages have:

https://stackoverflow.com/q/693672/265521

Surely... you were aware of this problem? Maybe I misunderstood the "???".

> Don't underestimate the backing of a large and powerful organization.

Yeah it definitely matters a lot. I don't think Go would have been remotely as successful as it has been without Google.

But also we shouldn't overstate it. It definitely helped Rust to have Mozilla, but Mozilla isn't nearly as large and powerful as Google. The fact that it is an excellent language with generally fantastic ergonomics and first-of-its-kind practical memory safety without GC... probably more important. (Of course you could argue it wouldn't have got to that point without Mozilla.)

> "Which standard library should I use?" is not a question most languages have: > https://stackoverflow.com/q/693672/265521

There is no such question when using D2 either. It was only an issue with D1, which was discontinued almost 15 years ago and had been irrelevant for longer than that.

> Yes but people wanted a language where you can't use GC.

What do you think of C and C++ coming with extensive guides for best practices and what features to not use? Even so, D comes with an @nogc attribute which won't let you use the GC. Ironically, people complain that @nogc actually does not allow use of the GC. You can also use the -betterC compiler switch to not use the GC.

Interestingly, Compile Time Function Execution works great with the GC, as one doesn't have to do backflips to allocate some memory.

Mozilla is orders of magnitude larger and more powerful than the D Language Foundation.

> What do you think of C and C++ coming with extensive guides for best practices and what features to not use?

I feel that this is a disingenuous point and that you know better than this.

For example, the poster child of C++'s "don't use this feature" cliché is exceptions, and its origins are primarily in Google's C++ style guide.

https://google.github.io/styleguide/cppguide.html#Exceptions

If you cross-reference your claims with what Google's rationale was, you will be forced to admit your remark misrepresents the whole point.

You do not need to read too far to realize Google's point is that they have a huge stack of legacy code that is not exception-safe and is not expected to be refactored, and introducing exceptions would lead their legacy code to break in ways that are not easy to remediate.

So Google had to make a call, and they decided to add the caveat that if your code is expected to be invoked by exception-free code, it should not throw exceptions.

Taken from the guide:

> Because most existing C++ code at Google is not prepared to deal with exceptions, it is comparatively difficult to adopt new code that generates exceptions.

I wonder why you left this bit out.

If this is how you try to get D to shine, then you should know why it doesn't.

C++ has a lot of features which are not best practices. For example, you're not supposed to use the builtin arrays anymore, in favor of vector<>.

Google's guide is not the only one. There is the Scott Meyers "Effective C++" series, with things like "declare destructors virtual in polymorphic base classes". D's destructors in polymorphic classes are always virtual.
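
A minimal sketch of the pitfall that rule guards against (illustrative code, not taken from the book):

    #include <string>

    struct Base {
        ~Base() {}                  // not virtual
    };

    struct Derived : Base {
        std::string name{"dtor never runs"};
    };

    int main() {
        Base* p = new Derived;
        delete p;                   // undefined behavior: ~Derived() is skipped,
                                    // so the std::string member is never destroyed.
                                    // Declaring ~Base() virtual fixes it; in D,
                                    // class destructors are always virtual.
    }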

This brings up another issue with C++ - conflation of polymorphic structs with non-polymorphic structs. The former should always be passed by reference, the latter maybe or maybe not. What C++ should have done is what D does - structs are for aggregation, classes are for OOP. The fundamental differences are enforced.

How does one enforce not passing a polymorphic object by value in C++? Some googling of the topic results in variations on "don't do that".
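
The usual failure mode is object slicing, and the closest thing to enforcement is opting out of copying per class. A rough sketch (all names here are made up):

    #include <iostream>

    struct Shape {
        virtual ~Shape() = default;
        virtual double area() const { return 0.0; }
    };

    struct Circle : Shape {
        double r = 2.0;
        double area() const override { return 3.14159 * r * r; }
    };

    double by_value(Shape s)      { return s.area(); }  // slices: always 0
    double by_ref(const Shape& s) { return s.area(); }  // correct: ~12.57

    int main() {
        Circle c;
        std::cout << by_value(c) << " vs " << by_ref(c) << "\n";
    }

    // The common workaround is opt-in, per class:
    //     Shape(const Shape&) = delete;
    //     Shape& operator=(const Shape&) = delete;
    // which also forbids legitimate copies; nothing in the language itself
    // says "polymorphic types must be passed by reference" the way D's
    // struct/class split does.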

> C++ has a lot of features which are not best practices. For example, you're not supposed to use the builtin arrays anymore, in favor of vector<>.

Again, you know better than this. I don't know why you are making these claims, and it's very disappointing to see you make whole sequences of them.

There are no "built-in" arrays in C++. There's C-style arrays, which are there for compatibility with C, and then there's C++'s STL. Since C++'s inception, the recommendation is to use C++'s containers. In STL, there is std::array and std::vector. Which one you pick, it's up to your use case.

This isn't a gotcha. This is C++ 101.
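
For completeness, a tiny sketch of the checked/unchecked distinction (illustrative):

    #include <array>
    #include <iostream>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::array<int, 4> a{};      // fixed size, no heap allocation
        std::vector<int> v(4, 0);    // dynamic size

        // a[7] or v[7] would compile and be undefined behavior;
        // the checked accessor reports the mistake instead:
        try {
            v.at(7) = 1;
        } catch (const std::out_of_range& e) {
            std::cout << "caught: " << e.what() << "\n";
        }
        (void)a;
    }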

> There are no "built-in" arrays in C++. There's C-style arrays,

They're built-in arrays. The C++11 n3290 specification calls them arrays in section 8.1. The term "array" is used regularly elsewhere in the specification. They are built in to the language. There is no warning from clang when compiling C++ code that these should not be used.

The trouble with C++ builtin arrays is that they have no bounds checking and promptly decay to pointers at every opportunity. Despite their obsolete nature, people still use them. There's no switch to turn them off.
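
Roughly what that decay looks like (illustrative names):

    #include <cstdio>

    // The parameter only looks like an array; the compiler silently rewrites
    // it to `int*`, so the length is gone and sizeof no longer reflects it.
    void fill(int buf[8]) {
        std::printf("%zu\n", sizeof(buf));  // size of a pointer, not 8 ints
        buf[100] = 0;                       // compiles with no diagnostic:
                                            // no bounds check (out of bounds, UB)
    }

    int main() {
        int data[8] = {};   // builtin array
        fill(data);         // decays to &data[0]
    }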

Where's the C++ guarantee that code doesn't use those builtin arrays?

"The best thing about standard libraries is that there are so many to choose from"

D has only one.

To your last point, the social/etc component of success always seems to be overlooked on HN. The world is littered with good solutions to problems that simply failed to get traction for various complex social reasons. There doesn’t have to be a technical reason something was not adopted widely. It could be fine, great even, and still just not get traction. I would bet everyone can name a piece of software they use that is not really popular at all, and everyone else uses something else that is popular, but this unpopular software really solves a problem that they have and they just like it.

Software never was a technical meritocracy.

There does not have to have been anything technically wrong with D for it to not have been widely adopted. I think HN doesn't like that because it often means there's nothing obvious they can necessarily do to fix it.

I've been on the verge of trying out D a few times now, and always decided against it in the end.

For me, it was the missing presence in IT news that did it. D might be great, but it makes no noise.

Rust and Go had a lot of articles and blog posts going deep into specific topics, appearing at a regular rate. They tended to show up on, e.g., Hacker News, Reddit, etc. This created a drip feed of tutoring, giving me a slow but steady feel for those languages. There were people tirelessly correcting misinformation. There were non-stop code examples of people doing stuff with the language, proving the language usable in all kinds of situations.

That's the result of having a lot of money behind it, and smart marketing.

However D still needs the ecosystem and support from platform vendors.

Unfortunately that was already lost. Java/Kotlin, Go, C# and Swift are the platform holders' darlings for safe languages with GC, being improved for low-level programming on each release, many with features that you could argue were in D first, and Rust covers everything else.

Microsoft recently announced first class support for writing drivers in Rust, while I am certain that NVidia might be supportive of future Rust support on CUDA, after they get their new Python cu tiles support going across the ecosystem.

Two examples out of many others.

The language is a great systems programming language; what is missing is the rest of the owl.

> With program redesigns that Rust requires

Why does Rust sometimes require program redesigns? Because those programs are flawed at some fundamental level. D lacks the most important and hardest kind of safety, and that is reference safety - curiously, C++ profiles also lack any solution to that problem. A significant amount of production C++ code is riddled with UB and will never be made safe by repainting it and bounds checking.

Claiming that not being forced to fix something fundamentally broken is an advantage when talking about safety doesn't make you look like a particularly serious advocate for the topic.

> Why does Rust sometimes require program redesigns? Because these programs are flawed at some fundamental level.

I'm familiar with borrow checkers, as I wrote one for D.

Not following the rules of the borrow checker does not mean the program is flawed or incorrect. It just means the borrow checker is unable to prove it correct.

> D lacks the most important and hardest kind of safety and that is reference safety

I look at compilations of programming safety errors in shipped code now and then. Far and away the #1 bug is out-of-bounds array access. D has solved that problem.

BTW, if you use the optional GC in D, the program will be memory safe. No borrow checker needed.

> I look at compilations of programming safety errors in shipped code now and then. Far and away the #1 bug is out-of-bounds array access. D has solved that problem.

Do you have good data on that? Looking at the curl and Chromium reports they show that use-after-free is their most recurring and problematic issue.

I'm sure you are aware, but I want to mention this here for other readers. Reference safety extends to things like iterators and slices in C++.
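
Since that's aimed at other readers, a small illustration of what it looks like in practice (contrived, illustrative code):

    #include <string>
    #include <string_view>
    #include <vector>

    int main() {
        std::vector<int> v = {1, 2, 3};
        int& first = v[0];        // reference into the vector's buffer
        v.push_back(4);           // may reallocate, invalidating 'first'
        int x = first;            // potential use-after-free, no diagnostic

        std::string_view sv;
        {
            std::string s = "temporary";
            sv = s;               // sv borrows s's buffer
        }                         // s is destroyed here
        // sv now dangles; reading through it is undefined behavior.
        (void)x;
    }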

> Not following the rules of the borrow checker does not mean the program is flawed or incorrect.

At a scale of 100k+ LoC, every single measured program has been shown to be flawed because of it.

No, I haven't kept track of the reports I've seen. They all had array bounds as the #1 error encountered in shipped code.

Edit: I just googled "causes of memory safety bugs in C++". Number 1 answer: "Buffer Overflows/Out-of-Bounds Access"

"Undefined behavior in C/C++ code leads to security flaws like buffer overflows" https://www.trust-in-soft.com/resources/blogs/memory-safety-...

"Some common types of memory safety bugs include: Buffer overflows" https://www.code-intelligence.com/blog/memory_safety_corrupt...

"Memory Safety Vulnerabilities 3.1. Buffer overflow vulnerabilities We’ll start our discussion of vulnerabilities with one of the most common types of errors — buffer overflow (also called buffer overrun) vulnerabilities. Buffer overflow vulnerabilities are a particular risk in C, and since C is an especially widely used systems programming language, you might not be surprised to hear that buffer overflows are one of the most pervasive kind of implementation flaws around." https://textbook.cs161.org/memory-safety/vulnerabilities.htm...

Spatial safety can be achieved exhaustively with a single compiler switch - in clang - and a minor performance hit. Temporal safety is much harder and requires software redesign; that's why it still remains an issue even in projects that care about memory safety and have spent a long time weeding out all instances of UB, i.e. critical software like curl, Linux, and Chromium.

Temporal safety is usually also much harder to reason about for humans, since it requires more context.

What flag is that? Address sanitizer has a 2x performance hit so presumably not that?

-fbounds-safety [1]

Based on their slides [2], I was wrong earlier: one does need to do some light annotation in function signatures and struct definitions for anything ABI-relevant. From their slides:

Ptrdist and Olden benchmark suites

- LOC changes: 2.7% (0.2% used unsafe constructs), much lower than prior approaches

- Compile-time overhead: 11%

- Code-size (text section) overhead: 9.1% (ranged -1.4% to 38%)

- Run-time overhead: 5.1% (ranged -1% to 29%)

Measurement on iOS

- 0-8% binary size increase per project

- No measurable performance or power impact on boot, app launch

- Minor overall performance impact on audio decoding/encoding (1%)

[1] https://clang.llvm.org/docs/BoundsSafety.html

[2] https://llvm.org/devmtg/2023-05/slides/TechnicalTalks-May11/...
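
For a flavor of what the annotations look like, the docs build on attributes such as __counted_by, roughly as below. This is a loose sketch based on my reading of [1], not a verified example:

    /* -fbounds-safety is (currently) a C-only experimental extension, so this
     * sketch is C rather than C++. Treat it as a rough illustration. */
    #include <ptrcheck.h>   /* defines __counted_by and related annotations */
    #include <stddef.h>

    /* Tying the pointer to its element count lets the compiler insert
     * run-time bounds checks on accesses through buf. */
    void fill(int *__counted_by(len) buf, size_t len) {
        for (size_t i = 0; i < len; ++i)
            buf[i] = 0;     /* in bounds: runs normally */
        /* buf[len] = 0;       out of bounds: would trap at run time */
    }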

> Why does Rust sometimes require program redesigns? Because these programs are flawed at some fundamental level.

Simply not true, and this stance is one of the reasons we have people talking about a Rust sect.

> Because these programs are flawed at some fundamental level.

No. Programs that pass borrow checking are a strict subset of programs that are correct with respect to memory allocation; an infinite number of correct programs do not pass it. The borrow checker is a good idea, but it's (necessarily) incomplete.

Your claim is like saying that a program that uses any kind of dynamic memory allocation at all is fundamentally broken.

> Because these programs are flawed at some fundamental level.

That's a very strong statement. How do you support it with arguments?

We have strong evidence that anything with 100k+ LoC that uses C or C++ will have use-after-free bugs because of reference semantics. I have no data on D but I wouldn't be surprised if that's the same case there as well. You need to think about ownership and haphazardly treating it as a free-for-all is a fundamental design flaw IMO. Shared mutable state is bad for the same reasons mutable global variables are frowned upon. One needs to keep the sum total of all places and all possible paths leading to them in context for reasoning about any of them. This very very quickly becomes intractable for human minds.
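
As a contrived illustration of why shared mutable state without ownership tracking goes wrong (names are made up):

    #include <vector>

    struct Cache {
        const int* hottest = nullptr;   // non-owning alias into someone else's buffer
    };

    int main() {
        std::vector<int> items = {10, 20, 30};
        Cache cache;
        cache.hottest = &items[0];      // a second path to the same memory

        items.clear();
        items.shrink_to_fit();          // request to free the buffer;
                                        // cache.hottest now (typically) dangles

        // Nothing in the type system ties cache.hottest to the lifetime of
        // items' buffer, so this compiles cleanly and reads freed memory:
        int v = *cache.hottest;         // use-after-free (UB)
        (void)v;
    }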

I don't know that they don't care about safety. They just don't agree with the definition others have picked. I remember when managed code became a thing. I, being an old C++ dev, noted that memory was always managed. It was managed by me.

Like the Google people who couldn't convince them and went on to create Carbon?

> Like the Google people who couldn't convince them and went on to create Carbon?

Lots of people in mega-companies set forth to reinvent the wheel. I think we have enough of a track record to understand that the likes of Google don't walk on water and some of their output is rather questionable and far from the gold standard. Appeals to authority are a logical fallacy for a reason.

Sorry man, but I have worked in professional security development in C/C++ since 2012. Normally, nobody talks about things like buffer overflows or use-after-free anymore; they haven't been a topic for years. Everyone uses tools to check for this, and in the end it's cheaper than using Rust. The attack vectors we talk about are logic errors and incorrect use of crypto, things that can happen with Bash, C/C++, Rust, and any other language, and that you can't check automatically. On top of that, we talk about supply chain attacks, something that Rust with Cargo is deeply exposed to.

But, based on the initiative of some Rust enthusiasts on one team, we tried it. The result after half a year was not to use it. Learning a new language is difficult, Rust is not fun to write for a lot of people, and a newbie Rust programmer writes worse code than a senior C/C++ programmer, even if it's the same person.

Aside from people hyped about Rust, there is not much interest in replacing C/C++. Currently I see no existential risk at all. On the other hand, Rust is currently overhyped; I would not bet that it will be easy to find long-time, experienced Rust developers to maintain your code in a decade.

> Normally, nobody talks about things like buffer overflows or use-after-free anymore; they haven't been a topic for years.

Some of the biggest vulnerabilities of recent years (e.g. Heartbleed) were out-of-bounds accesses. The most common vulnerability sources are things that are impossible in safe Rust but cannot be fully caught by C++ static checkers.

Rust has unsafe, just like Java.

On the other hand, _all_ of C++ is unsafe.