I know there is a belief that Rust/Ada etc. are safer than C/C++, and in some cases that is true. I know of multiple airworthy aircraft that are flying with C++ code. I also know of aircraft flying with Ada. The aircraft flying with Ada is hard to maintain. There is also a mountain of testing that goes into it that is not just unit testing. This mountain of integration, subsystem, and system-level testing is required regardless of the language.
I've never worked in aerospace, but I'm interested in safety-critical software engineering, and I've written a lot of Ada (and written about my experiences with it, too). My understanding is that yes, you can write safety-critical code in just about any language, but it's much easier to prove conformance to safety standards (like DO-178C) in Ada than in C++.
I regularly do hobby bare-metal programming in both Ada and C. I find that Ada helps prevent a lot of the common footguns in C. It's not a silver bullet for software safety, but it definitely helps. The point of any programming language is to make doing the right thing easy, and doing the wrong thing hard. All things considered, I think Ada does a good job of that.
It's not saying that it's impossible to write C code that is safe enough to be on an airplane. It's that there are languages with additional features that make it easier to have high confidence. If you can remove entire classes of bugs automatically, why not do so?
It matters less than you think. The entire point of e.g. DO-178C is to achieve high assurance that the code performs its intended function. Basically, you need to be able to trace your object code to system-level requirements. You need to achieve 100% MC/DC coverage from requirements-based testing. If there's something you don't cover, you either remove the code or add derived requirements (which need to be assessed by the system safety process). Language choice doesn't remove any of the objectives you need to achieve.
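To make the MC/DC objective concrete, here's a toy sketch (in Rust purely for readability; real DO-178C code is more often C or Ada, and the function and names are invented): for a decision with several conditions, requirements-based tests must show each condition independently flipping the outcome while the others are held fixed.

    // Hypothetical guard condition, invented for illustration only.
    fn deploy_spoilers_allowed(wow: bool, speed_ok: bool, armed: bool) -> bool {
        wow && (speed_ok || armed)
    }

    #[cfg(test)]
    mod mcdc_tests {
        use super::*;

        #[test]
        fn mcdc_pairs() {
            // `wow` flips the outcome while the others are held fixed:
            assert!( deploy_spoilers_allowed(true,  true,  false));
            assert!(!deploy_spoilers_allowed(false, true,  false));
            // `speed_ok` flips the outcome:
            assert!( deploy_spoilers_allowed(true,  true,  false));
            assert!(!deploy_spoilers_allowed(true,  false, false));
            // `armed` flips the outcome:
            assert!( deploy_spoilers_allowed(true,  false, true));
            assert!(!deploy_spoilers_allowed(true,  false, false));
        }
    }

That's only four distinct vectors (n+1 for three conditions), and each one has to trace back to a requirement: a coverage gap means either dead code or a missing requirement, exactly as described above.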
Also, keep in mind that the desire to have a deterministic system puts a lot of constraints on what kind of behavior you can program anyway.
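For instance, determinism usually rules out dynamic allocation and unbounded loops, which pushes you toward fixed-capacity structures with known worst-case behavior. A rough sketch (type and names invented):

    // Fixed-capacity queue: no heap, capacity known at compile time,
    // worst-case time and memory bounded by construction.
    struct FixedQueue<const N: usize> {
        buf: [u32; N],
        len: usize,
    }

    impl<const N: usize> FixedQueue<N> {
        const fn new() -> Self {
            Self { buf: [0; N], len: 0 }
        }

        // Never allocates; the overflow case is explicit, so the
        // caller must handle it deterministically.
        fn push(&mut self, v: u32) -> Result<(), ()> {
            if self.len == N {
                return Err(());
            }
            self.buf[self.len] = v;
            self.len += 1;
            Ok(())
        }
    }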
Here’s an example of how safety-critical C is written and formally verified: https://www.absint.com/
Based on what I know about Rust, it’s harder to write Rust to that same level of confidence, but I haven’t kept up with their safety-critical initiative.
> It's that there are languages with additional features that make it easier to have high confidence. If you can remove entire classes of bugs automatically, why not do so?
Which languages remove which classes of bugs entirely? This vagueness is killing me.
Safe Rust and Ada SPARK entirely remove classes of bugs like undefined behavior and memory safety issues. The latter will also statically eliminate things like overflow and type range errors.
These are subsets of their respective languages, but all safety critical development in C and C++ relies on even more constrained language subsets (e.g. MISRA or AV++) to achieve worse results.
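To make "removing a class of bugs" concrete, a small sketch of the Rust side (the SPARK story is analogous, but proved statically): the classic C memory errors either fail to compile or become defined, checked behavior.

    fn main() {
        let v = vec![1, 2, 3];

        // Out-of-bounds access is defined behavior (a deterministic
        // panic), never a silent read of adjacent memory as in C:
        // let x = v[10];            // would panic, not corrupt memory
        assert_eq!(v.get(10), None); // the checked API is explicit

        // Use-after-free is a compile-time error, not a latent bug:
        let first = &v[0];
        // drop(v);                  // ERROR: cannot move `v` while borrowed
        println!("first = {first}");
    }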
> These are subsets of their respective languages, but
Pretty much every language has such a subset. Nothing new then, sigh...
C and C++ don't have such a subset. That seems pretty relevant, given they're the languages being compared and they're used for the majority of safety critical development.
The standards I mentioned use tricks to get around this. MISRA, for example, has the infamous rule 1.3 that says "just don't do bad things". Actually following that or verifying compliance are problems left completely to the user.
On the other hand, Safe Rust is the default. You have to go out of your way to wrap code in an unsafe block. That unsafe block doesn't change the rules of the language either, it just turns off some compiler checks.
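Roughly, the opt-out looks like this: the unsafe block marks the region a reviewer has to scrutinize, but UB inside it is still UB.

    fn main() {
        let x: i32 = 42;
        let p = &x as *const i32; // making a raw pointer is safe...

        // let y = *p;            // ...but dereferencing it is not:
                                  // rejected outside an unsafe block

        let y = unsafe { *p };    // the obligation is now explicit
        assert_eq!(y, 42);
    }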
You mean memory-safe Rust is the default.
Taking this default is not enough to write safety-critical software… but it’s enough to write a browser (in theory) or some Android core daemons.
Unfortunately, no. "Memory-safe Rust" is a more general concept than "Safe Rust". "Safe Rust" is a generally understood term for the subset of Rust that's everything outside unsafe blocks. Here's an example where it's used in the language docs [0]. "Memory-safe Rust" also includes all the unsafe code that follows the language rules, which is ideally all of it.
I can see how this would be confusing and probably should have been clarified with emphasis in the original comment. Safety in the sense of "safety critical" isn't a property any programming language can have on its own, so I wouldn't have intended that regardless.
[0] https://doc.rust-lang.org/nomicon/meet-safe-and-unsafe.html
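A minimal sketch of the distinction: the function below is not "Safe Rust" (it contains an unsafe block), but it is still memory-safe Rust, because the invariant the unchecked call relies on is established first.

    // Not Safe Rust, but memory safe: the bounds invariant is
    // checked before the unchecked access depends on it.
    fn first_or_zero(data: &[u32]) -> u32 {
        if data.is_empty() {
            return 0;
        }
        // SAFETY: `data` is non-empty, so index 0 is in bounds.
        unsafe { *data.get_unchecked(0) }
    }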
Memory safety doesn't really help that much with functional safety.
Sure, a segfault could potentially make some device fail to do its safety-critical operation, but that is treated in the same way a logic bug would be, so it's not really a concern in and of itself.
But then again, an unchecked .unwrap() would lead to the same failure mode, so a "safe" crash is just as bad as an "unsafe" one.
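Concretely (a toy sketch; the sensor scenario is invented), the panic takes the task down just as surely as a segfault would:

    fn main() {
        let reading: Option<u32> = None; // e.g. a sensor returned nothing
        let value = reading.unwrap();    // panics: "called `Option::unwrap()`
                                         // on a `None` value"
        println!("{value}");             // never reached
    }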
Memory safety (as defined by Rust) actually goes a very long way to help with functional safety, mostly because in order to have a memory safe language, you need a number of additional language features that generally aid with correctness.
For example, lifetimes are necessary for memory safety in Rust, but you can use lifetimes much more generally to express things like "while this object exists, this other object is inaccessible", or "this thing is strictly read-only under these particular conditions". That's very useful.
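For instance (a rough sketch, names invented):

    struct Config {
        gain: f32,
    }

    fn main() {
        let mut cfg = Config { gain: 1.0 };
        {
            // Exclusive access: while `editor` is alive, every other
            // path to `cfg` is statically locked out.
            let editor = &mut cfg;
            editor.gain = 2.0;
            // println!("{}", cfg.gain); // ERROR: `cfg` is inaccessible here
        }
        println!("{}", cfg.gain); // borrow ended, access restored
    }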
But memory-unsafe code doesn't just segfault, it can corrupt your invariants and continue running, or open a door for an attacker to RCE on the machine. Memory safety is necessary (but not sufficient) to uphold what should be the simplest invariant of any code base, that program execution matches the source code in the first place.
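A sketch of that failure mode, expressed in unsafe Rust since that's the language under discussion (the struct is invented, and the bad write is left commented out because it is UB):

    struct Session {
        name_buf: [u8; 8],
        is_admin: bool, // invariant: only set via authentication
    }

    fn main() {
        let mut s = Session { name_buf: [0; 8], is_admin: false };
        let p = s.name_buf.as_mut_ptr();

        // An off-by-N write past `name_buf` may land on `is_admin`
        // (layout permitting) — undefined behavior, so anything goes:
        //
        //     unsafe { *p.add(8) = 1; }
        //
        // The program would not necessarily crash; it could keep
        // running as a "logged-in admin" with the invariant gone.
        let _ = p;
        println!("is_admin = {}", s.is_admin);
    }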
C and C++ don't have such a subset defined as part of their standards. Left to users means left to additional tools, which do exist. Rust only has memory safety by default; this is a small part of the problem, and it is not clear to me that having it helps with functional safety. (Although I agree that it helps elsewhere.)
I'd be happy to explain at length why the existing tools and standards are insufficient if you want. It'd be easier to have that discussion over another medium than HN comment chain though.
If you think a strong and convenient type system helps with functional safety, then Rust helps with functional safety. This is also generally the experience in the industry.
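As one concrete flavor of that: newtypes turn unit confusion, a classic functional-safety defect, into a compile error (types and numbers invented for illustration):

    #[derive(Clone, Copy, Debug)]
    struct Meters(f64);
    #[derive(Clone, Copy, Debug)]
    struct Feet(f64);

    impl From<Feet> for Meters {
        fn from(f: Feet) -> Self {
            Meters(f.0 * 0.3048)
        }
    }

    fn set_target_altitude(alt: Meters) {
        println!("target altitude: {:?}", alt);
    }

    fn main() {
        let reading = Feet(10_000.0);
        // set_target_altitude(reading);        // ERROR: expected `Meters`
        set_target_altitude(Meters::from(reading)); // conversion is explicit
    }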
I am not convinced a strong type system helps with functional safety, and I am not even deeply impressed by Rust's type system. The scientific literature doesn't even seem that clear about whether a strong type system substantially reduces software defects in general. I believe in proofs, though. I generally believe complexity is bad, and both C++ and Rust are too complex for my taste. I also think Rust has severe supply-chain issues.
This is comparing C, C++, Ada, SPARK and Rust... I think it's obvious.
Firmware is a different story, but for controls code the proper and civilized way of working is using Simulink with something like Polyspace and Embedded Coder, and auto-generating verifiable C code from your model. I know that on HN vim + invoking cc is the only way of working, but industry began moving forward quite a long time ago.
Sadly, MathWorks has a monopoly there.
Yes, by putting C and C++ into a straitjacket of formal verification and coding practices that would make most complaints about Rust's borrow checker seem like child's play.
What needs to be "maintained" in a flying aircraft? If it's in need of an update, why was it certified to fly that way in the first place?
Also, in safety-critical apps, being "difficult" can be a feature, not a bug. Should we have easier turbofans so we can pop them open and swap out blades and rings for tiny little improvements? No. Every flight-critical component should be fully understood as a prerequisite for use.
> why was it certified to fly that way in the first place?
Are you under the impression that software for aircraft is exceptionally good? A lot of the software for aircraft (for LRUs, avionics, whatever) is made by the same kind of developers as most other software.
You have no idea what you're talking about
Nearly 20 years in the aerospace industry, you're right, no clue.
There are new features or new subsystems to integrate, which require ICD updates, or bugs that need fixing.
What makes Ada harder to maintain? Do you have a source for that so I could read more?
Mostly non-technical things: continuity (or, rather, the lack thereof) and PR.
Continuity: Ada is not widely taught at universities, and, whilst AdaCore's GNAT Academic Program (GAP) does exist, one has to consciously seek out a university that offers a course in/on Ada. Ada and programming in Ada are not common knowledge, which stems from the next point.
PR: Ada, rightfully or wrongfully, does not exactly bask in the limelight of popularity – most assuredly not to the same extent as Python, NodeJS, TypeScript, C#/.NET etc. do. The current generation of Ada developers do not care (and probably should not), and the young and future generations of potential Ada developers miss out. Ada is not talked about in diverse contexts spanning web development, frontend/backend[0] development, containers, cloud – and the list goes on. Not because Ada can't be used in any of the aforementioned contexts; it is just that, due to the lack of PR, it remains an unnoticed reality – kind of like «if a tree falls in a forest and no one is around to hear it, does it make a sound?»
[0] Yes, «frontend development» and «backend development» are the fancy terms in wide use that the new generation can easily understand.