I am actually much more pessimistic about Profiles than Simone.
Regardless of the technology, the big thing Rust has that C++ does not is a safety culture, and that's dominant here. You could also see at the 2024 "Fireside chat" at CppCon that this isn't likely to change any time soon.
The profiles technology isn't very good. But that's insignificant next to the culture problem: once you've decided to make the fifteen-minute bagpipe dirge your lead single, it doesn't really matter whether you use the colored vinyl.
It doesn't show up in the online videos, but there was a huge contingent of people at that fireside chat wanting a reasonable safety story for C++. The committee simply doesn't have representation from those people and doesn't seem to understand why it's an existential risk to the language community. The delivery timelines are so long here that anything not standardized soon isn't going to arrive for a decade or more. That's all the time in the world for Rust (or even Zig) to break down the remaining barriers.
Profiles and sanitizers just aren't sufficient.
Yeah, because the committee is now made up of people who a) really love C++, and b) don't care enough about safety to use Rust instead.
I think there are plenty of people who must use C++ for legacy, management, or library reasons, and they do care about safety. But those people aren't going to join language committees.
“But those people aren't going to join language committees.”
This is amusingly wrong in the worst way. In the case of C++, they were there, but they left years ago when it became clear the committee didn't see this problem as existential.
And in the old days, as I keep saying, many of us (users) preferred C++ over C precisely because of the safety and stronger typing.
Yep. You either die young or live long enough to become the villain
They could also care about safety but just not like the Rust approach.
This was asked at the aforementioned chat. Andreas Weis (MISRA) responded along the lines of "You shouldn't be writing new code in C++ if you want guarantees". Might not have the identity correct, my notes aren't in front of me.
> (...) "You shouldn't be writing new code in C++ if you want guarantees".
I'm afraid context is required to actually understand what was said. For example, it can mean anything including very obvious things like stating that the committee is still working on proposals to provide guarantees and they won't feature in a standard until the work is done and a new standard is published. Which would be stating the obvious.
Love this quote. And love the intent.
> Love this quote. And love the intent.
What intent do you think it has? That proposals are still being worked on and haven't been published in a specification yet?
D adds a lot to memory safety without needing to struggle with program redesigns that Rust requires.
These include:
1. bounds checked arrays (you can still use raw pointers instead if you like)
2. default initialization
3. static checks for escaping pointers
4. optional use of pure functions
5. transitive const and immutable qualifiers
6. ranges based on slices rather than pointer pairs
I think D failed to gain widespread traction for other reasons though:
1. The use of garbage collection. If you accept GC there are many other languages you can use. If you don't want GC the only realistic option was C++. Rust doesn't rely on GC.
IIRC GC in D is optional in some way, but the story always felt murky to me and that always felt like a way of weaseling out of that problem - like if I actually started writing D I'd find all the libraries needed GC anyway.
2. The awkward standard library schism.
3. Small community compared to C++. I think it probably just didn't offer enough to overcome this, whereas Rust did. Rust also had the help of backing from a large organisation.
I don't recall anyone ever mentioning its improved safety. I had a look on Algolia back through HN and most praise is about metaprogramming or it being generally more modern and sane than C++. I couldn't find a single mention of anything to do with safety or anything on your list.
Whereas Rust shouts safety from the rooftops. Arguably too much!
> I think D failed to gain widespread traction for other reasons though:
D's only selling proposition was providing C++11 features at a point in time between C++98 and C++11, when the C++ committee struggled to get a new standard out the door.
Once C++11 was out, D's sales pitch was moot, and whatever wind it had in its sails was lost and never recovered.
It's interesting to note that Rust, in spite of all odds and also its community, managed to put together a far more compelling sales pitch than D.
Using D does not require a garbage collector. You can use it, or not, and you can use the GC for some allocations, and use other methods for other allocations.
D has a lot of very useful features. Memory safety features are just one aspect of it.
> The awkward standard library schism.
???
Don't underestimate the backing of a large and powerful organization.
> You can use it, or not, and you can use the GC for some allocations, and use other methods for other allocations.
Yes but people wanted a language where you can't use GC.
> ???
"Which standard library should I use?" is not a question most languages have:
https://stackoverflow.com/q/693672/265521
Surely... you were aware of this problem? Maybe I misunderstood the "???".
> Don't underestimate the backing of a large and powerful organization.
Yeah it definitely matters a lot. I don't think Go would have been remotely as successful as it has been without Google.
But also we shouldn't overstate it. It definitely helped Rust to have Mozilla, but Mozilla isn't nearly as large and powerful as Google. The fact that it is an excellent language with generally fantastic ergonomics and first-of-its-kind practical memory safety without GC... probably more important. (Of course you could argue it wouldn't have got to that point without Mozilla.)
> "Which standard library should I use?" is not a question most languages have: > https://stackoverflow.com/q/693672/265521
There is no such question when using D2 either. It was only an issue with D1, which was discontinued almost 15 years ago and was irrelevant for longer.
> Yes but people wanted a language where you can't use GC.
What do you think of C and C++ coming with extensive guides for best practices and what features to not use? Even so, D comes with an @nogc attribute which won't let you use the GC. Ironically, people complain that @nogc actually does not allow use of the GC. You can also use the -betterC compiler switch to not use the GC.
Interestingly, Compile Time Function Execution works great with the GC, as one doesn't have to do backflips to allocate some memory.
Mozilla is orders of magnitude larger and more powerful than the D Language Foundation.
> What do you think of C and C++ coming with extensive guides for best practices and what features to not use?
I feel that this is a disingenuous point and that you know better than this.
For example, the poster child of C++'s "don't use this feature" cliché is exceptions, and its origins are primarily in Google's C++ style guide.
https://google.github.io/styleguide/cppguide.html#Exceptions
If you cross-reference your claims with what Google's rationale was, you will be forced to admit your remark misrepresents the whole point.
You do not need to read too far to realize Google's point is that they have a huge stack of legacy code that is not exception-safe and not expected to be refactored, and introducing exceptions would lead their legacy code to break in ways that are not easy to remediate.
So Google had to make a call, and they decided to add the caveat that if your code is expected to be invoked by exception-free code, it should not throw exceptions.
Taken from the guide:
> Because most existing C++ code at Google is not prepared to deal with exceptions, it is comparatively difficult to adopt new code that generates exceptions.
I wonder why you left this bit out.
If this is how you try to get D to shine, then you should know why it isn't.
C++ has a lot of features which are not best practices. For example, you're not supposed to use the builtin arrays anymore, in favor of vector<>.
Google's guide is not the only one. There is the Scott Meyers "Effective C++" series, with things like "declare destructors virtual in polymorphic base classes". D's destructors in polymorphic classes are always virtual.
This brings up another issue with C++ - conflation of polymorphic structs with non-polymorphic structs. The former should always be passed by reference, the latter maybe or maybe not. What C++ should have done is what D does - structs are for aggregation, classes are for OOP. The fundamental differences are enforced.
How does one enforce not passing a polymorphic object by value in C++? Some googling of the topic results in variations on "don't do that".
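To illustrate the problem for readers who haven't hit it, here is a minimal sketch (the types are invented for the example): passing a polymorphic object by value compiles cleanly and silently slices off the derived part.

    #include <iostream>

    struct Animal {
        virtual ~Animal() = default;
        virtual const char* sound() const { return "..."; }
    };

    struct Dog : Animal {
        const char* sound() const override { return "woof"; }
    };

    void speak(Animal a) {            // by value: the Dog part is sliced away
        std::cout << a.sound() << '\n';
    }

    int main() {
        Dog d;
        speak(d);                     // prints "...", not "woof"; no diagnostic by default
    }

The usual mitigations (make the base class non-copyable, or lean on an external linter) are exactly the "don't do that" conventions in question.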
> C++ has a lot of features which are not best practices. For example, you're not supposed to use the builtin arrays anymore, in favor of vector<>.
Again, you know better than this. I don't know why you are making these claims, and it's very disappointing to see you make whole sequences of them.
There are no "built-in" arrays in C++. There's C-style arrays, which are there for compatibility with C, and then there's C++'s STL. Since C++'s inception, the recommendation is to use C++'s containers. In STL, there is std::array and std::vector. Which one you pick, it's up to your use case.
This isn't a gotcha. This is C++ 101.
> There are no "built-in" arrays in C++. There's C-style arrays,
They're built-in arrays. The C++11 n3290 specification calls them arrays in section 8.1. The term "array" is used regularly elsewhere in the specification. They are built in to the language. There is no warning from clang compiling C++ code that these should not be used.
The trouble with C++ builtin arrays is they have no bounds checking and promptly decay to pointers at every opportunity. Despite the obsolete nature of them, people still use them. There's no switch to turn them off.
Where's the C++ guarantee that code doesn't use those builtin arrays?
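A tiny sketch of the decay-plus-no-bounds-check combination being described (the sizes are arbitrary):

    // The "10" in the parameter type is discarded; the callee just sees an int*.
    void fill(int a[10]) {              // adjusted by the language to: void fill(int* a)
        for (int i = 0; i <= 10; i++)   // off-by-one: the last iteration writes past the end
            a[i] = 0;
    }

    int main() {
        int buf[10];
        fill(buf);                      // buf decays to a pointer; typically no diagnostic
        return buf[0];                  // the out-of-bounds write above is undefined behaviour
    }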
"The best thing about standard libraries is that there are so many to choose from"
D has only one.
To your last point, the social/etc component of success always seems to be overlooked on HN. The world is littered with good solutions to problems that simply failed to get traction for various complex social reasons. There doesn’t have to be a technical reason something was not adopted widely. It could be fine, great even, and still just not get traction. I would bet everyone can name a piece of software they use that is not really popular at all, and everyone else uses something else that is popular, but this unpopular software really solves a problem that they have and they just like it.
Software never was a technical meritocracy.
There does not have to have been anything technically wrong with D for it to not have been widely adopted. I think HN doesn’t like that because it often means that there’s nothing obvious they can necessarily do to fix it
I've been on the verge of trying out D a few times now, and always decided against it in the end.
For me, it was the missing presence in IT news that did it. D might be great, but it makes no noise.
Rust and Go had a lot of articles and blog posts going deep into specific topics, appearing at a regular rate. They tended to appear on, e.g., Hacker News, Reddit, etc. This caused a drip feed of tutoring, giving me a slow but steady feel for these languages. There were people tirelessly correcting misinformation. There were non-stop code examples of people doing stuff with the language, proving the language usable in all kinds of situations.
That's the result of having a lot of money behind it, and smart marketing.
However D still needs the ecosystem and support from platform vendors.
Unfortunately that was already lost: Java/Kotlin, Go, C#, and Swift are the platform holders' darlings for safe languages with GC, being improved for low-level programming with each release, many with features that you could argue were in D first, and Rust for everything else.
Microsoft recently announced first class support for writing drivers in Rust, while I am certain that NVidia might be supportive of future Rust support on CUDA, after they get their new Python cu tiles support going across the ecosystem.
Two examples out of many others.
The language is a great systems programming language; what is missing is the rest of the owl.
> (...) with program redesigns that Rust requires
Why does Rust sometimes require program redesigns? Because these programs are flawed at some fundamental level. D lacks the most important and hardest kind of safety and that is reference safety - curiously C++ profiles also lacks any solution to that problem. A significant amount of production C++ code is riddled with UB and will never be made safe by repainting it and bounds checking.
Claiming that not being forced to fix something fundamentally broken is an advantage when talking about safety doesn't make you look like a particularly serious advocate for the topic.
> Why does Rust sometimes require program redesigns? Because these programs are flawed at some fundamental level.
I'm familiar with borrow checkers, as I wrote one for D.
Not following the rules of the borrow checker does not mean the program is flawed or incorrect. It just means the borrow checker is unable to prove it correct.
> D lacks the most important and hardest kind of safety and that is reference safety
I look at compilations of programming safety errors in shipped code now and then. Far and away the #1 bug is out-of-bounds array access. D has solved that problem.
BTW, if you use the optional GC in D, the program will be memory safe. No borrow checker needed.
> I look at compilations of programming safety errors in shipped code now and then. Far and away the #1 bug is out-of-bounds array access. D has solved that problem.
Do you have good data on that? Looking at the curl and Chromium reports they show that use-after-free is their most recurring and problematic issue.
I'm sure you are aware, but I want to mention this here for other readers. Reference safety extends to things like iterators and slices in C++.
> Not following the rules of the borrow checker does not mean the program is flawed or incorrect.
At a scale of 100k+ LoC every single measured program has been shown to be flawed because of it.
No, I haven't kept track of the reports I've seen. They all had array bounds as the #1 error encountered in shipped code.
Edit: I just googled "causes of memory safety bugs in C++". Number 1 answer: "Buffer Overflows/Out-of-Bounds Access"
"Undefined behavior in C/C++ code leads to security flaws like buffer overflows" https://www.trust-in-soft.com/resources/blogs/memory-safety-...
"Some common types of memory safety bugs include: Buffer overflows" https://www.code-intelligence.com/blog/memory_safety_corrupt...
"Memory Safety Vulnerabilities 3.1. Buffer overflow vulnerabilities We’ll start our discussion of vulnerabilities with one of the most common types of errors — buffer overflow (also called buffer overrun) vulnerabilities. Buffer overflow vulnerabilities are a particular risk in C, and since C is an especially widely used systems programming language, you might not be surprised to hear that buffer overflows are one of the most pervasive kind of implementation flaws around." https://textbook.cs161.org/memory-safety/vulnerabilities.htm...
Spatial safety can be achieved exhaustively with a single compiler switch - in clang - and a minor performance hit. Temporal safety is much harder and requires software redesign, that's why it still remains in projects that care about memory-safety and try over a long time to weed out all instances of UB, i.e. critical software like curl, Linux and Chromium.
Temporal safety is usually also much harder to reason about for humans, since it requires more context.
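To make the distinction concrete, a minimal sketch: a bounds check (or -fbounds-safety) catches an out-of-range index, but not the dangling reference below, because every individual access is in bounds.

    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};

        int& first = v[0];   // spatially fine: index 0 is in bounds
        v.push_back(4);      // may reallocate, freeing the old buffer and invalidating 'first'
        first = 42;          // temporal bug: a potential use-after-free no bounds check can see
        return first;
    }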
What flag is that? Address sanitizer has a 2x performance hit so presumably not that?
-fbounds-safety [1]
I was wrongly informed: one does need to do some light annotation in function signatures and struct definitions for anything ABI-relevant. Based on their slides [2]:
Ptrdist and Olden benchmark suites
- LOC changes: 2.7% (0.2% used unsafe constructs) Much lower than prior approaches
- Compile-time overhead: 11%
- Code-size (text section) overhead: 9.1% (ranged -1.4% to 38%)
- Run-time overhead: 5.1% (ranged -1% to 29%)
Measurement on iOS
- 0-8% binary size increase per project
- No measurable performance or power impact on boot, app launch
- Minor overall performance impact on audio decoding/encoding (1%)
[1] https://clang.llvm.org/docs/BoundsSafety.html
[2] https://llvm.org/devmtg/2023-05/slides/TechnicalTalks-May11/...
> Why does Rust sometimes require program redesigns? Because these programs are flawed at some fundamental level.
Simply not true, and this stance is one of the reasons we have people talking about a Rust sect.
> Because these programs are flawed at some fundamental level.
No. Programs that pass borrow checking are a strict subset of programs that are correct with respect to memory allocation; an infinite number of correct programs do not pass it. The borrow checker is a good idea, but it's (necessarily) incomplete.
Your claim is like saying that a program that uses any kind of dynamic memory allocation at all is fundamentally broken.
> Because these programs are flawed at some fundamental level.
That's a very strong statement. How do you support it with arguments?
We have strong evidence that anything with 100k+ LoC that uses C or C++ will have use-after-free bugs because of reference semantics. I have no data on D but I wouldn't be surprised if that's the same case there as well. You need to think about ownership and haphazardly treating it as a free-for-all is a fundamental design flaw IMO. Shared mutable state is bad for the same reasons mutable global variables are frowned upon. One needs to keep the sum total of all places and all possible paths leading to them in context for reasoning about any of them. This very very quickly becomes intractable for human minds.
I don't know that they don't care about safety. They just don't agree with the definition others have picked. I remember when managed code became a thing. I, being an old C++ dev, noted that memory was always managed: it was managed by me.
Like the Google people who couldn't convince them and went on to create Carbon?
> Like the Google people who couldn't convince them and went on to create Carbon?
Lots of people in mega-companies set forth to reinvent the wheel. I think we have enough track record to understand that the likes of Google don't walk over water and some of the output is rather questionable and far from the gold standard. Appeals to authority are a logical fallacy for a reason.
Sorry man, but I have worked in professional security development in C/C++ since 2012. Normally no one has talked about things like buffer overflows, use-after-free, and so on for years. Everyone uses tools to check for this, and in the end it's cheaper than using Rust. The attack vectors we talk about are logic errors and wrong usage of crypto, things that can happen with Bash, C/C++, Rust, and any other language, and that you can't check automatically. In addition to that, we talk about supply chain attacks, a thing that Rust with Cargo falls deep into.
But, based on the initiative of some Rust enthusiasts on one team, we tried it. The result after half a year was not to use it. Learning a new language is difficult, Rust is not fun to write for many people, and a newbie Rust programmer writes worse code than a senior C/C++ programmer, even if it's the same person.
Aside from people hyped by Rust, there is not much interest in replacing C/C++. Currently I see no existential risk at all. On the other hand, Rust is currently overhyped; I would not bet that it will be easy to find long-time experienced Rust developers to maintain your code in a decade.
> Normally no one has talked about things like buffer overflows, use-after-free, and so on for years
Some of the biggest vulnerabilities of recent years (e.g. Heartbleed) were out-of-bounds access. The most common vulnerability sources are things that are impossible in Rust, but cannot be fully solved via C++ static checkers.
Rust has unsafe, just like Java.
On the other hand, _all_ of C++ is unsafe.
I came to the same conclusion in a talk I gave to the Munich C++ Meetup [1]. There is a prevalent culture of expecting users to not make mistakes. Library constructs that could be significantly safer to use are kept easy to use incorrectly, usually with the argument of performance. The irony is that if you look closer, the performance optimization that was done is removing the seatbelts from a small hatchback to save on weight.
[1] https://youtu.be/rZ7QQWKP8Rk or text form https://github.com/Voultapher/Presentations/blob/main/safety...
A year or so ago I read that there was a design decision railroaded through the committee about what kind of safety approach could be looked at. Its wording effectively prevented Safe C++. I was not at this meeting so I’m going on what others say:
https://www.reddit.com/r/cpp/comments/1hppdzc/comment/m4jjo4...
I’m a big fan of Safe C++ and believe its approach — learning from another language, incremental opt-in (just like all good refactorings, work on code and improve it piece by piece) — would have been the path that solved some genuine problems. Profiles seem a hodgepodge. And — to share personal worries about what I read into what comments like the above imply, this is not a statement — I worry deeply about the relationship between who proposes what, and who has pricklier personalities or less connections, with what approach was accepted.
I wish Safe C++ would continue as a hard fork of the language.
As much as the alternatives (profiles) don't solve the issue, Safe C++ (Circle) does have substantial issues as well. You need a separate and largely incompatible standard library, including containers. Generic code (templates) is largely left unsolved on a conceptual level so far. At this point, incrementally replacing parts of your code with Rust - which has a mature ecosystem and tooling; remember, if you want provable safety, none of the dependencies used in Safe C++ code are allowed to be unsafe - is going to be less hassle. Firefox showed you can do it, and even Microsoft is choosing that path for the Windows kernel. Waiting for Safe C++ to be usable seems like waiting for a worse Rust. Interop is hardly going to be much better than bindings generated by cxx.
I get that, though I think different containers can be refactored well.
Replacing parts with an entirely different language is a whole other level compared to same language, modified standard library.
And I think a big reason Rust is winning is that it works, today. C++ doesn't. There is no other path other than to migrate to another language.
> There is a prevalent culture of expecting users to not make mistakes.
I think the older of us C/C++ programmers come from no-safety languages like assembly language. That doesn't mean that all of us are "macho programmers" (as I was called here once). C's weak typing and compilers emitting warnings give a false sense of security which is tricky to deal with.
The statement you make is not entirely correct. The more correct statement is that there is a prevalent culture of expecting users to find strategies to avoid mistakes. We are engineers. We do what we need with what we have, and we did what we had to with what we had.
When you program with totally unsafe languages, you develop more strategies than just relying on a type checker and borrow checker: RAII, "crash early", TDD, completion-compatible naming conventions, even syntax highlighting (coloring octal numbers differently)...
BUT. the cultural characteristics of the programmers are only one-quarter of the story. The bigger part is about company culture, and more specifically the availability of programmers. You won't promote safer languages and safer practice by convincing programmers that it has zero impact on performance. It's the companies that you need to convince that the safe alternatives are as productive [1] as the less safe alternatives.
[1] https://xkcd.com/303/
I get the feeling you didn't watch my talk. The example in question is sorting. Say, for example, your comparison function does not implement a strict weak ordering, which can easily happen if you use <= instead of <. In C++ you routinely get out-of-bounds reads and writes; in Rust you get some unspecified element order.
In what world is the former preferable to the latter?
This behavior is purely an implementation choice. Even the C people at glibc and LLVM libc consider this undesirable and are willing to spend 2-3% overhead on making sure you don't get that behavior.
No, this is not "expecting users to find strategies to avoid mistakes".
> Even the C people at glibc and LLVM libc consider this undesirable and are willing to spend 2-3% overhead on making sure you don't get that behavior.
libc++ actually had to roll back a std::sort improvement because it broke too much code that was relying on bad comparators. From the RFC for adding comparator checks to debug libc++ [0]:
> Not so long ago we proposed and changed std::sort algorithm [1]. However, it was rolled back in 16.0.1 because of failures within the broken comparators. That was even true for the previous implementation, however, the new one exposed the problems more often.
[0]: https://discourse.llvm.org/t/rfc-strict-weak-ordering-checks...
[1]: https://reviews.llvm.org/D122780 (not the original link, but I think this is the review for the changeset that was rolled back)
It looks more like an implementation error to me, and actually it looks more like a design mistake than an implementation choice because there are arguments in favor of using an abstract functor class for the comparison function (you'll need a closure sooner or later), which would have given the chance to warn the user about this particular issue in the docs - at least it would have been more visible and clearer than it currently is [1].
Because until vibe coding becomes a culture, programmers are at least expected to "RTFM". But that's also a requirement which is becoming harder to meet by the year, because - you almost said it in the first few minutes of your talk - "we needed to merge it ASAP".
This mistake seems to have been somewhat fixed in C++20 [2]. "Too little too late", yes, probably.
[1] https://en.cppreference.com/w/cpp/algorithm/sort.html
[2] ibidem, tacit use of std::less.
> When you program with totally unsafe languages, you develop more strategies than just relying on a type checker and borrow checker: RAII, "crash early", TDD, completion-compatible naming conventions, even syntax highlighting (coloring octal numbers differently)...
Having written a fair bit of rust and C, I don't consider the tools for safety in C to be good enough.
In C, it's so easy for small mistakes to turn into CVEs. ASAN and friends help. But they're a long way from perfect. Testing helps. But in C, there's usually a fair bit of time that passes between when I make a mistake and when I discover the bug through testing. It's also so easy for bugs to hide in C in the cracks of UB.
One of my clearest experiences with C and rust was a rope library I wrote several years ago. Ropes are "fancy strings". They're strings, but they support O(log n) insert & delete, at arbitrary positions. I wrote my library in pure C, implemented on top of a skip list. The code is very subtle - like, there's a lot of very careful logic. A single incorrect line of code will often cause silent data corruption or memory errors that don't show up until much later.
It took about as long to properly test & debug the library as it took to write it in the first place. Debugging it was exhausting - there were a myriad of obscure edge cases that I needed fuzzing to track down. When the fuzzer found problems, going from a failing fuzzer trace to a code fix was a big job.
Before I started, I had a bunch of optimisations in mind that I wanted to add to the library. But it was so exhausting getting it working at all that I never got around to most of them. Eg I wanted to make each node in the skip list into a gap buffer to reduce memcopies. But implementing that would have required significant code changes - which in turn would have meant a new round of memory bugs and debugging. I never brought myself to do it.
At some point I rewrote the library in rust, with liberal use of raw pointers. I made just as many mistakes in the implementation - though the compiler caught a lot of them. The first time I ran it, it segfaulted. And I thought "here we go again". But despite using raw pointers, there were only 2 unsafe functions in the whole program. A segfault in rust can only happen from unsafe code. So I took a read of that code - and lo and behold, there was my bug, plain as day. Time to fix: 2 minutes. The library never segfaulted again in all my testing. The first time I benchmarked it, it was ~10% faster than the C version. I still have no idea why.
It was so much easier to write that a little while later, I put the gap buffer optimisation in. Now the rust library is 2-3x faster than C. In this case, memory safety made my program easier to write. And that resulted in better performance.
If anyone is curious, C / rust code is here:
https://github.com/josephg/librope
https://github.com/josephg/jumprope-rs
> BUT. the cultural characteristics of the programmers are only one-quarter of the story. The bigger part is about company culture, and more specifically the availability of programmers.
Yeah absolutely. I think this is the biggest downside of rust. Rust is really hard - and painful - to learn. It front loads all the pain. In C, you suffer while debugging. In rust, all that suffering happens while learning the language in the first place. I spent months fighting the borrow checker. And it's very demotivating not being able to compile your program at all. Once you understand it, it makes sense. But I think there will always be a limited pool of programmers willing to struggle through.
Even chatgpt is bad at rust. It makes all sorts of classic beginner mistakes with lifetimes, and the resulting code often won't compile. Even after pointing out the problem, chatgpt is often unable to correct lifetime bugs.
C and C++ are fundamentally memory-unsafe languages. That doesn't make them bad languages, but it is a reality that you have to face when you work with them. And one of the things we've learned is that building safe abstractions, while not a complete solution, goes quite a long way.
And then CISA suggested that "maybe we should stop using memory-unsafe languages." And this has some of the C++ committee utterly terrified; they need something that lets them tell the government that C++, today, is memory-safe. That thing is C++ profiles. It's not about actually making C++ memory-safe, it's about being able to check the box that C++ is memory-safe, and this is so important it needs to be voted into the standards yesterday and why are you guys saying mean things about C++ profiles...
C++ profiles is a magic solution to the problem. As one committee member noted, there's not enough description of profiles yet to even figure out if it can be implemented or not. Instead, it's just a vague, well, compile with -fsanitize=address, compile with fortify-source, use hardened malloc, that makes my code memory-safe, right? And for as long as profiles remains a magic solution to check a box, it will remain vaporware in practice.
One of the real risks I see in the C++ committee is that they seem to want to drive all of the implementers out of the room.
Spot on. Since C++11 the committee has increasingly started to design and add features to the standard without any kind of implementation; only after the standard gets ratified do the implementers eventually find out that the design is broken or has flaws.
A strange phenomenon akin to the Algol 68 days, and a path no other ISO-based language is taking; elsewhere, standardising existing practice or having a full test implementation is still pretty much how things are done.
How many export templates, GC, type traits defect fixes, volatile behaviour changes, modules, contracts,.... can implementers still put up with?
The committee got burned, extremely badly, by C++03's export templates, which got standardized without an implementation; everyone then realized it was basically unimplementable, and the only group that did implement it wrote a paper telling everyone else not to [1].
This was well in mind when C++11 came around (especially since C++11 slipped so badly: it was originally "C++0x", and then they ran out of digits for x). By the time we hit C++20, I think the lessons were lost, especially when it came to there being two competing modules implementations and the committee deciding to select neither.
1: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n14...
Wow, that paper is absolutely damning.
> Design: 1.5 years (elapsed) to come up with a design they believed they could implement.
> Development: 3 person-years (3 people × >1 year each)
> (Note: By comparison, implementing the complete Java language from scratch took the same team 2 person-years.
> I am actually much more pessimistic about Profiles than Simone.
Likewise. Apparently Stroustrup wrote his first Profiles paper two years before HN existed. That's an incubation period long enough to wonder about its value, for multiple values of "value."
I’m very disconnected from the C/++ world, so when I was reading about profiles in another comment here (the one about the committee essentially saying “Safe C++ is off the table”), profiles sounded like something that was in an early phase of planning and implementation. Then the linked Reddit thread mentioned them being a thing in 2015 and I thought “oof they’ve been on profiles for a minute and it’s this underdeveloped?” and now your comment - yikes!
There was a time when I was learning programming where my options were what the library offered: books on C/++, Java, obscure X extensions, Assembly, and BASIC. I quickly found that C just didn’t “click” for me and the JRE/JDK was a real pain to work with on a slow internet connection and no guidance.
The only thing that’s made me want to go back to trying systems programming is the existence of Rust - I like the concept of the borrow checker and the added safety in general, and the community I’ve seen online seems a lot more inviting and friendly than I ever felt trying to find C++ resources.
All that said, the conversation around core safety in C++ and the community’s reaction to “workarounds” (like offloading functionality to Rust) makes me want to just never step back in to that particular world. There’s no appeal I can see about loading a footgun and putting on a blindfold unless you need to work in that world.
As discussed multiple times, I agree with the sentiment.
I think we are reaching a phase where C++ won't be going away, as it is quite relevant in many fields; however, the two-language approach will keep increasing, and many will consider C++26 good enough for such scenarios.
C++26 and not lower, due to reflection.
I am certain anything else beyond C++26 will only be considered by hardcore C++ shops that culturally won't ever use anything else, besides scripting for builds and OS automation tasks.
> Regardless of the technology, the big thing Rust has that C++ does not is a safety culture, and that's dominant here.
True. So many proposals have gone by over the years. Here's one of mine from 2001.[1] Bad idea. The layers of cruft in C++ have become so deep that it's a career just to understand them.
DARPA has something called the TRACTOR program, "Translate All C to Rust". It's been underway for a year, and they have a consortium of universities working on it. Not much, if anything, has come out. Disappointing.
Rust is probably too hard. I write 100% safe Rust, and there are times when I hit an ownership structure wall and have to spend several days re-planning. So far I've always succeeded without using "unsafe" or indices, but it drags down productivity.
Although object-oriented programming is out of fashion, classes with inheritance are useful. It's really hard to do something comparable in Rust. Traits are not that helpful for this.
Go is a good compromise. Safety at a minor cost in performance. Go is good enough for web back end stuff. Go has both GC and "green threads". This automates the problems that wear people down in C++ and Rust.
[1] https://www.animats.com/papers/languages/cppstrictpointers.h...
> So far I've always succeeded without using "unsafe" or indices, but it drags down productivity.
There is a common perception that Rust is less productive than competing languages, but empirical research by Google and others has found this to be wrong. Rust just shifts the effort earlier in the development phase, where the costs are often orders of magnitude lower. You may spend a few hours struggling with the borrow checker, but that saves you countless days of debugging highly non-trivial defects, especially in a larger codebase.
> Although object-oriented programming is out of fashion, classes with inheritance are useful. It's really hard to do something comparable in Rust. Traits are not that helpful for this.
FWIW, "classes with inheritance" in Rust can be very elegantly modeled with generic typestate. (Traits are used as part of this pattern, but are not the full story.) It might look clunky at first glance, but it accurately reflects the underlying semantics.
> Rust just shifts the effort earlier in the development phase, where the costs are often orders of magnitude lower.
That works fantastically when you're rewriting something - you already have the idea and final product nailed down.
It works poorly when you don't have everything nailed down and might switch a lot of stuff around, or remove stuff that isn't needed, etc.
> It works poorly when you don't have everything nailed down and might switch a lot of stuff around, or remove stuff that isn't needed, etc.
I do prototype applications in Rust and it involves heavy refactoring, including deletions. Those steps are the easiest ones for me and rarely give me any headache. Part of the reason is the interfaces that you're forced to define clearly early on. Even the unrelated friction of satisfying the borrow checker gently nudges you towards that.
The real problems are often caused by certain operations that the type system can't prove to be safe, even when they are. For example, you couldn't write async closures until recently. Such situations often require lots of thought to resolve. You may have to restructure your code or use a workaround like Rc.
The point is, these sorts of assumptions often don't seem to hold in practice, at least in my experience. My personal experience doesn't agree with the assertion that prototyping is hard in Rust.
I think that's actually where Rust can shine -- it's very good at refactoring, so when you move stuff around and cut things out that you don't need, the compiler tells you exactly how to put everything back together and what exactly needs to be changed. As the codebase grows, refactorability becomes crucial, because refactors are risky and can fail, causing major schedule disruptions. High code velocity achieved early on by ignoring reference lifetimes, borrows, and type checking might feel good, but those shortcuts are a detriment later on when the project needs to start making guarantees.
> It works poorly when you don't have everything nailed down and might switch a lot of stuff around
If you're prototyping code you can just do defensive .clone() calls and use Rc<> to avoid borrow checker issues. You don't need maximum efficiency, and the added boilerplate doesn't hurt that much: in fact, it helps should you want to refactor the code later.
> Rust just shifts the effort earlier in the development phase […]. You may spend a few hours struggling with the borrow checker.
And by the time you got it figured out, the requirements change, and you’re back to struggling with the borrow checker.
> that saves you countless days of debugging highly non-trivial defects, especially in a larger codebase.
Seems like many projects never get to the point where the architecture toil pays off. Instead, they spend 80–90% of their time trying to find the perfect architecture, which is then brittle against change.
> You may spend a few hours struggling with the borrow checker, but that saves you countless days of debugging highly non-trivial defects, especially in a larger codebase.
How often have you had issues like that in the last 10 years of your work? It's questionable whether it's really cheaper for everyone. Another question: is there any really large Rust codebase out there that's older than 10 years? One that has had time to gather the crust of tons of developers, to compare with the corresponding C++ codebases? I don't think so.
The biggest issue with Rust that I have found is that there are phase changes where making small changes to the code becomes impossible and you must completely redesign the program for it to work with the borrow checker.
TRACTOR is currently proceeding. The program is structured in phases. Each phase will present the participants with increasingly difficult challenges to translate. At the end of each phase the participants will be tested and the results of these tests will be publicly announced. The first phase of TRACTOR began in June and will run for six months.
> So far I've always succeeded without using "unsafe" or indices, but it drags down productivity.
I really don't understand this perspective. The whole philosophy of Rust is one where you document why "unsafe" is safe. It is not, and never has been, a goal to make everything safe, because that is an impossible goal to merge with a high-performance systems language: hardware itself is unsafe. It's why the unsafe keyword exists. If that wasn't the goal, unsafe wouldn't.
If unsafe is not used, then no one has to determine whether the unsafe parts are actually safe.
Sure, but taken to an extreme you see the absurd degree to which you have to contort yourself. And that's with the current version of the proof checker - some unsafes are even only temporary until a better prover comes along.
You shouldn’t go out of your way to use unsafe, but between that and 2 weeks refactoring, I’ll take the unsafe and use tools like miri or ASAN to provide extra guards. Engineering is inherently about making practical choices.
I just start everything with
It can be preferable to avoid unsafe when reasonable to do so. Programmers are merely human and will make a mistake at some point, and by avoiding unsafe you at least get the guarantee that the buggy behaviour is sound and (aside from race conditions) more predictable.
I think you misunderstand me. I’m not saying that unsafe should be the first thing you reach for. But if you can’t find an easy way to express it safely, and the only path visible is a time costly refactor, it can still be the most cost effective approach and shouldn’t be ignored.
Even ignoring the practicality argument, there's a reason you see it in things like crossbeam or zerocopy: not everything worth expressing can even be expressed in purely safe code, whether because of performance or because the ownership lifetime cannot be understood due to the limitations of the borrow checker, even when it is indeed safe code.
This is exactly how technical debt accumulates. You can write a comment documenting that "unsafe is actually safe here in routine X because Y will always do Z," but if the code lives long enough, someone may eventually change X or Y in a way that falsifies that claim, or try to cut-and-paste X's code into a new context without those guarantees. No, that's not /prudent/, but it happens nevertheless, and the costs of tracking down and fixing the errors are almost always higher than just implementing more conservatively in the first place.
Judging by this https://vt.social/@lina/113056457969145576, rewriting any even remotely complex project in Rust will require making decisions (function signatures, ownership, and so on) based on information that might not be present in the C code at all, like API conventions. A translator being able to decide on all these things automatically would probably be quite close to solving the halting problem.
Sigh. As I point out occasionally, the halting problem very rarely comes up in practice. Combinatorial explosion, yes, but not actual undecidability.
Understanding the implicit constraints of C/C++ functions is something where an LLM can help. Once you have the constraints recorded, they become formal constraints at the call. Historic C isn't expressive enough for even basic constraints.
Most of the constraints mentioned are at least expressible in Rust.
I dislike Go's minimalism, however it fits something I have been saying for years.
Many languages that predated Java and C#, already had everything that Go offers and then some.
Modula-3, Oberon, Oberon-2, Active Oberon, Component Pascal, Eiffel.
Had Java and C#, just like those languages, had full support for AOT compilation, value types, and the same low-level programming capabilities, much of the stuff that was still written in C or C++ during 2000-2010 would not have happened, and maybe C++11 would not have been as relevant as it was.
During that decade many people kept writing C or C++, because they lacked mainstream alternatives for AOT compiled languages, and not because they were into low level systems programming.
> The layers of cruft in C++ have become so deep that it's a career just to understand them.
The committee have, over decades, dug themselves into a hole that they won’t be able to get out of, even if they wanted to.
C++ does have some features that I appreciate, but for every new project I begin it has to contend with Rust and OCaml, and most of the time it loses.
It will stay relevant for existing projects, but becoming the language of choice for new projects will only get harder with time.
> Go is a good compromise. Safety at a minor cost in performance. Go is good enough for web back end stuff. Go has both GC and "green threads".
Or, for even better performance, you can use Java (24+), which has both "green threads" and a more advanced and performant GC and compiler than Go.
I think something less obvious to people is that type inheritance in C++ has several uses outside of building naive object hierarchies. Even if your model is based on composition as is typically the case these days, inheritance is a useful tool for expressing some metaprogramming mechanics and occasionally literal old style inheritance is actually the right thing to do. You don’t need it most of the time but sometimes not having it makes everything much uglier.
As of C++20 in particular, C++ has taken on a very traits-y character if you go all-in on the new language features.
The way out for C++ is probably to lean into compile-time codegen and verification within the language which is already a pretty unique capability. It dramatically reduces the lines of code a developer has to write. Defect rates closely track lines of code written regardless of the language so large improvements in compile-time expressiveness is a pretty big win.
Sadly we got concepts lite instead of C++0x concepts, so while better than SFINAE or tag dispatch, it is still a half solution, and it won't get better because those behind the original design eventually went on to Swift, and nowadays Hylo.
I think there is room for an ML with a modern toolchain story that just omits Rust's borrow checker and does something more boring. Typescript and Rust have primed a large number of developers to be open to it.
This is kinda the opposite of what I think people really want. They want a low-level language with borrow checking, without all of the abstractions and baggage you get in Rust.
Most of those abstractions and baggage come from the need to be able to represent and propagate lifetime constraints, though.
proc macros? Optional?
Isn't ReasonML pretty much that language already? Although the most popular language in that broader niche is probably Golang.
> Go is a good compromise
Go doesn't prevent data races. Besides the borrow checker, the thing that makes Rust special is the Send and Sync traits.
And I would say the deficiencies in Profiles and the fact that Safe C++ was killed are the technical decisions reflecting the culture problem.
> The profiles technology isn't very good.
Can you be very specific about why?
Here's the argument for why profiles might work: with all of the profiles enabled, you are only allowed to use the safe subset of C++ and all of the unsafe stuff is hidden behind APIs whose implementations don't have those profiles enabled. Those projects that enable all profiles by default effectively get Swift-like or Rust-like protection.
Like, you could force all array operations to use C++ stdlib primitives, enable full hardening of the stdlib, and then have bounds safety.
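Concretely, the bounds-safety piece of that already exists as library hardening; a sketch, assuming a recent libc++ (libstdc++ has -D_GLIBCXX_ASSERTIONS for the same purpose):

    // Build with: clang++ -std=c++20 -D_LIBCPP_HARDENING_MODE=_LIBCPP_HARDENING_MODE_EXTENSIVE oob.cpp
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        return v[3];  // unhardened: silent out-of-bounds read (UB); hardened: traps deterministically
    }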
And you could force all lifetime operations to use C++ stdlib refcounting primitives, and then have lifetime safety in a Swift-like way (i.e. eager refcounting everywhere).
I can imagine how this falls over but then it might just be a matter of engineering to make it not fall over.
(I'm playing devil's advocate a bit since I prefer Fil-C++.)
I illustrate why it won't work with a number of examples here: https://www.circle-lang.org/draft-profiles.html
To address your points: 1. The safe subset of C++ is too small to do anything with. 2. The Standard Library is not written in the safe subset.
My favorite example from the above paper is the problem of std::sort -- the compiler has no idea if both operands are iterators into the same allocation. The function is fundamentally unsafe. Which C++ profile do you turn on to make that safe? Does it ban use of std::sort? Does it ban use of all <algorithms>, all of which work on pointers/iterators that are susceptible to use-after-free UB?
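To make that concrete, a small sketch; the containers are made up, the point is that the signature cannot express the "same allocation" requirement:

    #include <algorithm>
    #include <vector>

    int main() {
        std::vector<int> a{3, 1, 2};
        std::vector<int> b{6, 5, 4};
        std::sort(a.begin(), b.end());  // compiles without a peep; undefined behaviour at runtime
    }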
The whole Standard Library is unsafe. I proposed a rigorously safe std2, and that was rejected. And now you propose a safe std2 (using refcounting primitives)--why would that fare better? What does Profiles actually propose? No change in existing code. The compiler simply finds all UB. Right.
If that is what profiles were actually doing, it would probably make sense. But it's not what profiles are doing.
Instead, for example, the lifetime safety profile (https://github.com/isocpp/CppCoreGuidelines/blob/master/docs...) is a Rust-like compile time borrow checker that relies on annotations like [[clang::lifetimebound]], yet they also repeatedly insist that profiles will not require this kind of annotation (see the papers linked from https://www.circle-lang.org/draft-profiles.html#abstract).
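For readers who haven't seen it, this is roughly what that annotation looks like in use; the function below is invented for illustration and is not taken from the profiles papers.

    #include <string>

    // The annotation ties the lifetime of the returned reference to the annotated arguments.
    const std::string& shorter(const std::string& a [[clang::lifetimebound]],
                               const std::string& b [[clang::lifetimebound]]) {
        return a.size() < b.size() ? a : b;
    }

    int main() {
        // Clang warns here: the returned reference is bound to temporaries that die
        // at the end of the full expression.
        const std::string& r = shorter(std::string("temporary"), std::string("values"));
        (void)r;
    }

Note that the warning only exists because the annotation is on the signature, which is precisely the kind of requirement the papers insist they won't impose.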
Their messaging is just not consistent with the concrete proposals they have described, let alone actually implemented.
Additionally, they ignore field experience. I can tell you that on VC++ the lifetime checker has only worked on small examples, as I was really keen on trying it out.
Microsoft even has blog posts admitting that it can only be improved with SAL-like annotations, while keeping the usual C++ semantics.
Yet WG21 has ignored this field experience.
Yes I can be specific.
Firstly, you need composition. Rust's safety composes. The safe Rust library for farm animals from Geoff, the safe Rust library for cooking recipes by Alice, and the safe Rust web server library by Bert, together with my safe program code, add up to my safe Rust farm foods web site.
By having N profiles, where N is intended to be at least five and might grow arbitrarily and be user extensible, C++ guarantees it cannot deliver composition this way.
Maybe they can define some sort of composition and maybe everybody will ship software which conforms to that definition and so eventually they get composition, that's not there today, so it's just a giant unknown at best.
Secondly, of the profiles described so far, most of them are just solving parts of the single overarching problem Rust addresses, for the serial case. So if they ship that, which already involves some amount of new work yet to be finished, you need all of those profiles to get to only partial memory safety.
Which comes to the third part. Once you start down this path, as they found, you realise you actually want a borrowck. You won't call it that of course, because that would be embarrassing. But you'll need to track reference lifetimes and you'll need annotation and you end up building most of the stuff you insisted you didn't want. For now, you can handwave, this is an unsolved static analysis problem. Well, not so much unsolved as you know the solution and you don't like it.
Your idea to do the reference counting everywhere is not something WG21 has looked at, I think the perf cost is sufficiently bad that they won't even glance at it. They're also not going to ship a GC.
Finally though, C++ is a concurrent language. It has a whole memory model which doesn't even make sense if you aren't thinking about concurrency. But to deliver concurrent memory safety without Fil-C's overheads you would want... well, Rust's Send and Sync traits, which sure enough have eerie twins in the Safe C++ proposal. No attempt to solve this is even hinted at in the current profiles proposal, and they would need to work one out and if it's not Send + Sync again they'd need to prove it is correct.
+1 ... Rust has done pretty much the minimal thing that one needs to write C/C++ like programs safely... things must fit together to cover all scenarios (borrow checker / mut / send / sync / bounds checking). Especially for multithreading.
C++ / profiles will not be able to do much less or much different to achieve the same goals.
I think the point is that folks will incrementally move their code towards having all profiles enabled, and that's sort of fundamental if the goal is to give folks with C++ codebases an incremental path to safety. So I don't buy your first and second points.
> Which comes to the third part. Once you start down this path, as they found, you realise you actually want a borrowck.
That's a bold statement. It might be true for some very loose definition of "borrow checker". See the super simple static analysis that WebKit uses (that presentation is now linked in at least two places on this HN discussion, so I won't link it again).
> Your idea to do the reference counting everywhere is not something WG21 has looked at, I think the perf cost is sufficiently bad that they won't even glance at it. They're also not going to ship a GC.
The point isn't to have ref counting on every pointer at the language level, but rather: if you prevent folks from calling `delete` directly (as one of the profiles does) then you're effectively forcing folks to use smart pointers.
Reference counting that happens by smart pointers is something that they would ship. We know this because it's already happened.
I imagine this would really mean that some references are ref counted (if you use shared_ptr or similar) while other references use some other policy.
> Finally though, C++ is a concurrent language. It has a whole memory model which doesn't even make sense if you aren't thinking about concurrency. But to deliver concurrent memory safety without Fil-C's overheads you would want... well, Rust's Send and Sync traits
Yeah, this might be an area where they leave a hole. Like, you might have reference counting that is only partially thread safe:
- The refcount of any object is atomic.
- The smart pointer itself is racy. So, racing on pointers can pop the protections.
If they got that far, then that wouldn't be so bad. The marginal safety advantage of Rust would be very slim at that point.
> I think the point is that folks will incrementally move their code towards having all profiles enabled, and that's sort of fundamental if the goal is to give folks with C++ codebases an incremental path to safety.
I doubt it, because the reason I favoured C++ over C back in 1993 was the safety culture, as someone coming from Turbo Pascal.
Somehow this has been deteriorating since 2000, as C++ kept getting C refugees that would rather keep using C, but work required C++ now.
Most of the hardening capabilities that are being added now were already part of the compiler frameworks during the 1990's, e.g. Turbo Vision, OWL, MFC, CSet++, MacApp, PowerPlant, ...
So you agree, then: it's technically not as good. With a lot of extra work that nobody has signed up to do, some of which is speculative, they can't quite get to where Safe C++ was when it was proposed.
As far as I'm concerned, there are two main issues with profiles:
1. They're either unimplementable or useless (too many false positives and false negatives).
I think this is pretty evident from the fact that profiles have been proposed for a while and no real implementation exists. Worse, out of all the open-source projects and for-profit companies, no one has been able to implement any sort of static analysis that would even begin to approach the guarantees Rust makes.
2. The language doesn't give you any tools to actually write safe code.
Ok, let's say that someone actually implements safety profiles. And it highlights your usage of a standard library type. What do you do?
Safe C++ didn't require a new standard library just because. The current stdlib is riddled with safety issues that can't really be fixed and would not be fixed because of backwards compatibility.
You're stuck. And so you turn the safety profile off.
My limited understanding is: there is no safe subset. (That's what was just discontinued; profiles are the alternative.)
And C++ code simply doesn't have the necessary info to make safety decisions. Sean explains it better than I can https://www.circle-lang.org/draft-profiles.html
The analysis you link to is insufficient.
E.g., the first case is "Inferring aliasing". He presents some examples and states, "The compiler cannot infer a function’s aliasing requirements from its declaration or even from its definition."
But why not?
The aliasing requirements come directly from vector. If the compiler has those then determining the aliasing requirements of those functions is straightforward.
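To make that concrete, here's a hypothetical example (not the paper's exact code) of an aliasing requirement that comes straight from vector's own semantics:

    #include <vector>

    // If `x` happens to alias an element of `v`, push_back may reallocate
    // the buffer and leave `x` dangling. The requirement "x must not alias
    // v's storage" falls out of vector's documented behaviour, not out of
    // anything special about this particular function.
    void append_and_bump(std::vector<int>& v, int& x) {
        v.push_back(0);  // may invalidate references into v
        x += 1;          // use-after-free if x referred into v's old buffer
    }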
Now, maybe there is some argument that a C++ compiler cannot determine the aliasing requirements of vector, but if that's the claim, then the paper should make it, and back it up.
The paper continues in the same vein in the next section, as if the lifetime requirements of map and min cannot be known or cannot bubble up through the functions that call them.
As written, the paper says almost nothing about the feasibility of static analysis of C++ to achieve safety goals for C++.
I imagine it's (implicitly?) referring to avoiding whole-of-program analysis.
For example, given a declaration like, say, `int* func(int* input);`:
What's the relationship between the return value and the input? You can't know without diving into 'func' itself; they could be the same pointer, or it could return a freshly allocated pointer, without getting into the even more esoteric options. Trying to solve this without recursively analysing a whole program at once is infeasible.
Rust's approach was to require that more information be provided in function signatures, but that's new syntax, and not backwards compatible, so not a palatable option for C++.
> avoiding whole-of-program analysis
Why, though?
Perhaps it's unfeasibly complex? But if that's the argument, then that's an argument that needs to be made. The paper sets out to refute the idea that C++ already has the information needed for safety analysis, but the examples throw away most of the information C++ does have, without explanation. I can't really take it seriously.
In general, there are three reasons to avoid whole program analysis:
1. Complexity. This manifests as compile times. It takes much longer.
2. Usability. Error messages are poor, because changes have nonlocal effects.
3. Stability. This is related to 2. Without requirements expressed in the signature, changes in the body change the API, meaning keeping APIs stable is much harder.
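A hypothetical illustration of point 3: two functions with the same declaration whose bodies impose different obligations on callers, so body-derived requirements silently change the API whenever the body changes.

    #include <cstdio>

    // Version 1: `p` is only read during the call, so callers only need
    // `*p` to stay valid for the duration of the call.
    void log_value(const int* p) {
        std::printf("%d\n", *p);
    }

    // Version 2: same declaration, but the body now stashes the pointer,
    // so callers must keep `*p` alive until the next use of `g_last`.
    // Nothing in the signature reflects that the contract changed.
    const int* g_last = nullptr;
    void log_value_v2(const int* p) {
        g_last = p;
        std::printf("%d\n", *p);
    }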
There’s really a simple reason why it’s not fully feasible in C++ though: C++ supports separate compilation. This means the whole program is not required to be available. Therefore you don’t have the whole program for analysis.
It's not even required for the information to be present at link time; C/C++ doesn't require a pointer to always be owned or always be not-owned. It's valid for that to be decided by configuration loaded at runtime, or even at random.
Trying to establish proofs that the pointer is one way or the other can't work, because the pointer doesn't have to be only one or the other.
The fact that you then have to treat the pointer one way or the other is a problem; if you reduce the allowed programs so that the pointer must be one of the two that's a back-compat hazard. If you don't constrain it, you need to require additional information be carried somewhere to determine how to treat it.
If you do neither, you don't have the information needed to safely dispose of the pointer.
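A minimal sketch of that runtime-decided ownership (the names here are made up):

    #include <cstdio>

    // Whether `p` is owned is not a property of the program text at all;
    // it's decided by a flag that could come from a config file or user
    // input. No static classification of `p` as "owning" or "non-owning"
    // can be right for both call sites below.
    void consume(int* p, bool owns) {
        std::printf("%d\n", *p);
        if (owns) {
            delete p;
        }
    }

    int main() {
        int on_stack = 1;
        consume(&on_stack, false);   // non-owning use
        consume(new int(2), true);   // owning use
        return 0;
    }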
Local reasoning is the foundation of everything formal (this includes type systems), and anyone in the type-system-design space would know that. Graydon Hoare (ex-Rust dev) wrote a post about it too (which links to another great withoutboats post in the very first line): https://graydon2.dreamwidth.org/312681.html
The entire point of having a static type system is to enable local reasoning. Otherwise, we would just do whole-program analysis on JS instead of inventing TypeScript.
The Profiles authors are the ones claiming this uses local analysis only: https://news.ycombinator.com/item?id=41942126
They are clear that Profiles infers everything from function types and not function bodies. Obviously that won't work, but that's what they say.
In that post (I think your own?) it says, "Local analysis only. It's not looking in function definitions."
But local analysis means analysis of function definitions. At least it does to me. I can't think of what else it could mean. I think there must be some aspect of people talking past each other here, using the same words to mean different things.
Further, I don't think local analysis of the code comprising a function means throwing away the results of that analysis rather than passing it up the line to the analysis of callers of the function. E.g., local analysis of std::sort would establish its aliasing limitations, which would be available to analysis of the body of "f1" from the example in the paper (the results of which, in turn, would be available to callers of f1).
Now, I don't know if that's actually feasible/workable without the "heavy" annotation that C++ profiles wants to forbid. That's the key question to me.
> with all of the profiles enabled, you are only allowed to use the safe subset of C++ and all of the unsafe stuff is hidden behind APIs whose implementations don't have those profiles enabled.
This is not the goal of profiles. It’s to be “good enough.” Guaranteed safety isn’t in the cards.
> This is not the goal of profiles. It’s to be “good enough.” Guaranteed safety isn’t in the cards.
- Rust isn’t totally guaranteed safe since folks can and do use unsafe code.
- Exact same situation in Swift
- Go has escape hatches too, e.g. data races can break its guarantees, and that's not the only one.
So most “safe” things are really “safe enough” for some definition of “enough”.
You’re misunderstanding what I’m saying. Safe Rust guarantees memory safety. Profiles do not. This is regardless of the ability of the unchecked versions, on both sides, to introduce issues.
Profiles do not, even for code that is 100% using profiles, guarantee safety.
The kind of "safe Rust" where you never use `unsafe` and never call into a C library is theoretical. None of the major ports of software to Rust achieve that.
So, no matter what safe language we talk about, "safety" always has its caveats.
Can you be specific about what missing safety feature of profiles leads you to be so negative about them?
No, I am saying that safe rust says “if unsafe is correct, safe rust means memory safety.” Profiles does not even reach that bar, it says “code under profiles is safer.”
It’s not about specifics, it’s about the stated goals of profiles. They do not claim to prove memory safety even with all of them turned on.
You've misunderstood what Steve is saying, and what safe/unsafe means in Rust. In Rust, if I have a block of code that doesn't use any operations that require the unsafe keyword, then I am guaranteed (modulo compiler bugs) that this block of code is free of all undefined behaviour.
It does not guarantee that code in any function being called within that block is free of it, but it does guarantee this block of code is.
Profiles don't give you that.
> The kind of "safe Rust" where you never use `unsafe` and never call into a C library is theoretical. None of the major ports of software to Rust achieve that.
An entire program ported to Rust will call into unsafe APIs in at least a few places, somewhere down the call stacks.
But you'll still have swathes of code that doesn't ultimately end up calling an unsafe API, which can be trivially considered memory safe.
The language standard assumes that everyone collectively agrees to standard semantics implying certain things. If users don't follow the rules and write something without semantics (undefined behavior), the entire program is meaningless as opposed to just the bit around the violation. You know this, so I emphasize it here because it's entirely incompatible with the view that "good enough" is a meaningful concept to discuss from the PoV of the standard.
Rust does a pretty good job formalizing what the safety guarantees are and when you can assume them. Other languages don't, but they also don't support use cases that C++ nominally does, like safety-critical systems. "Good enough" can be perfectly fine for a web service in a language like Go while being grossly inadequate for HPC or safety-critical work.
> And you could force all lifetime operations to use C++ stdlib refcounting primitives, and then have lifetime safety in a Swift-like way (i.e. eager refcounting everywhere)
That's going to be a non-starter for 99% of serious C++ projects. The performance hit is going to be way too large.
For bounds checking, sure I think the performance penalty is so small that it can be done.
You have to realize that the number of locations in code where a reference-counter adjustment is actually meaningful is rather small, and there are simple rules that keep the excess thrash from reference-counting pointer wrappers to a minimum. The main one, as mentioned in the talk the sibling comment called out, is that it is OK to pass a raw pointer or reference to a function while holding on to a reference count for as long as that other function runs (and doesn't leak the pointer through a side effect). This rule eliminates a lot of pointless counter arithmetic caused by excessive pointer-wrapper copying.
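A rough sketch of that rule (hypothetical names, using std::shared_ptr rather than WebKit's own Ref/RefPtr types):

    #include <memory>

    struct Node {
        int value = 0;
    };

    // The callee takes a plain reference, so it does no refcount work at
    // all; the rule requires that it not stash the pointer anywhere.
    void process(Node& node) {
        node.value += 1;
    }

    void caller(const std::shared_ptr<Node>& owner) {
        // `owner` keeps the Node alive for the whole call, so passing a
        // raw reference down is fine and avoids extra counter traffic.
        process(*owner);
    }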
Maybe C++ should copy some Swift, before attempting to challenge Rust.
It already has, multiple times:
Managed C++, C++/CLI, C++/CX, C++ Builder, Unreal C++
But those aren't extensions or approaches WG21 cares about having.
The C++11 GC design didn't even take those experiences into consideration, so it got zero adoption and was removed in C++23.
That would have been my first guess, but WebKit's experience doing exactly this is the opposite.
See https://www.youtube.com/watch?v=RLw13wLM5Ko
Note that they also allowed other kinds of pointers so long as their use could be statically verified using very simple rules.
> For bounds checking, sure I think the performance penalty is so small that it can be done.
Depends on how many times it's inlined and/or if it's in hot code. It can result in much worse assembly code.
Funny thing: C++17 string_view::substr has a bounds check plus an exception throw, whereas span::subspan has neither; I can see substr's approach being problematic performance- and code-size-wise if it's called many times with arguments the caller has already validated.
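A small illustration of that difference (assumes a C++20 compiler for std::span):

    #include <cstdio>
    #include <span>
    #include <stdexcept>
    #include <string_view>
    #include <vector>

    int main() {
        std::string_view sv = "abc";
        try {
            auto tail = sv.substr(10);   // pos > size(): checked, throws
            (void)tail;
        } catch (const std::out_of_range&) {
            std::puts("substr checked the bounds");
        }

        std::vector<int> v{1, 2, 3};
        std::span<int> sp(v);
        // auto bad = sp.subspan(10);    // no check: out of range is UB
        (void)sp;
        return 0;
    }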
There'd be less opposition if profiles worked that way. The real goal is to define a subset that excludes 95% of the unsafe stuff, as opposed to providing hard guarantees.
Rust has safety culture? Not in the wild: a lot of coders seem to think it's cool to use unsafe to get an extra hair of performance at the cost of safety.
The first four letters of "culture" are certainly right.