It's not saying that it's impossible to make C code that is safe enough to be on an airplane. It's that there are languages with additional features which make it easier to achieve high confidence. If you can remove entire classes of bugs automatically, why not do so?
It matters less than you think. The entire point of e.g. DO-178C is to achieve high assurance that the code performs its intended function. Basically, you need to be able to trace your object code to system-level requirements, and you need to achieve 100% MC/DC coverage from requirements-based testing. If there's something you don't cover, you either remove the code or add derived requirements (which need to be assessed by the system safety process). Language choice doesn't remove any of the objectives you need to achieve.
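To make the MC/DC objective concrete, here's a toy illustration (the function and its conditions are invented for the example, not from any real standard or codebase): every condition in a decision has to be shown, by test, to independently affect the outcome.

    // Hypothetical decision with three conditions. MC/DC requires a
    // test set in which flipping each condition alone flips the result.
    fn deploy_spoilers(armed: bool, on_ground: bool, wheels_spinning: bool) -> bool {
        armed && (on_ground || wheels_spinning)
    }

    // A minimal MC/DC test set for this decision (n + 1 = 4 tests):
    //   armed=T, on_ground=T, wheels_spinning=F -> T   (baseline)
    //   armed=F, on_ground=T, wheels_spinning=F -> F   (armed alone flips it)
    //   armed=T, on_ground=F, wheels_spinning=F -> F   (on_ground alone flips it)
    //   armed=T, on_ground=F, wheels_spinning=T -> T   (wheels_spinning alone flips it)

And each of those tests has to trace back to a requirement; tests invented purely to hit coverage don't count.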
Also, keep in mind that the desire to have a deterministic system puts a lot of constraints on what kinds of behavior you can program anyway.
Here’s an example of how safety-critical C is written and formally verified: https://www.absint.com/
Based on what I know about Rust, it’s harder to write Rust to that same level of confidence, but I haven’t kept up with their safety-critical initiative.
> It's that there are languages with additional features which make it easier to achieve high confidence. If you can remove entire classes of bugs automatically, why not do so?
Which languages remove which classes of bugs entirely? This vagueness is killing me.
Safe Rust and Ada SPARK entirely remove classes of bugs like undefined behavior and memory safety issues. The latter will also statically eliminate things like overflow and type range errors.
These are subsets of their respective languages, but all safety critical development in C and C++ relies on even more constrained language subsets (e.g. MISRA or AV++) to achieve worse results.
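As a minimal sketch of what that looks like on the Rust side (SPARK has its own mechanisms), this use-after-free is rejected at compile time, whereas the equivalent C compiles and exhibits undefined behavior:

    fn main() {
        let dangling: &String;
        {
            let s = String::from("temporary");
            dangling = &s;
        } // `s` is dropped here, so `dangling` would point to freed memory
        // Rejected with error[E0597]: `s` does not live long enough
        println!("{}", dangling);
    }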
> These are subsets of their respective languages, but
Pretty much every language has such a subset. Nothing new then, sigh...
C and C++ don't have such a subset. That seems pretty relevant, given they're the languages being compared and they're used for the majority of safety critical development.
The standards I mentioned use tricks to get around this. MISRA, for example, has the infamous rule 1.3, which boils down to "just don't do bad things". Actually following that, or verifying compliance with it, is a problem left entirely to the user.
On the other hand, Safe Rust is the default: you have to go out of your way to wrap code in an unsafe block. That unsafe block doesn't change the rules of the language either; it just turns off some compiler checks.
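A small sketch of that point: `unsafe` only unlocks a handful of extra operations (dereferencing raw pointers, calling unsafe functions, and so on); every other rule of the language keeps applying inside the block.

    fn main() {
        let x: u32 = 42;
        let p = &x as *const u32;
        // Dereferencing a raw pointer is one of the few operations
        // that requires an unsafe block...
        let y = unsafe { *p };
        // ...but the borrow checker, type checker, and all other
        // language rules still run on the code inside it.
        println!("{}", y);
    }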
You mean memory-safe Rust is the default.
Taking this default is not enough to write safety-critical software… but it’s enough to write a browser (in theory) or some Android core daemons.
Unfortunately, no. "Memory-safe Rust" is a more general concept than "Safe Rust". "Safe Rust" is a generally understood term for the subset of Rust that is everything outside unsafe blocks; here's an example of it being used in the language docs [0]. "Memory-safe Rust" also includes all the unsafe code that follows the language rules, which is ideally all of it.
I can see how this would be confusing and probably should have been clarified with emphasis in the original comment. Safety in the sense of "safety critical" isn't a property any programming language can have on its own, so I wouldn't have intended that regardless.
[0] https://doc.rust-lang.org/nomicon/meet-safe-and-unsafe.html
Memory safety doesn't really help that much with functional safety.
Sure, a segfault could potentially make some device fail to perform its safety-critical operation, but that is treated the same way a logic bug would be, so it's not really a concern in and of itself.
But then again, an unchecked .unwrap() would lead to the same failure mode, so a "safe" crash is just as bad as an "unsafe" one.
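A minimal sketch of that failure mode (the sensor function is hypothetical):

    // Hypothetical sensor read that can fail.
    fn read_sensor() -> Option<f64> {
        None // the sensor didn't respond
    }

    fn main() {
        // Panics with "called `Option::unwrap()` on a `None` value".
        // Perfectly memory safe, but the safety-critical operation
        // downstream still never runs.
        let value = read_sensor().unwrap();
        println!("sensor reading: {}", value);
    }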
Memory safety (as defined by Rust) actually goes a very long way to help with functional safety, mostly because in order to have a memory safe language, you need a number of additional language features that generally aid with correctness.
For example, lifetimes are necessary for memory safety in Rust, but you can use lifetimes much more generally to express things like "while this object exists, this other object is inaccessible", or "this thing is strictly read-only under these particular conditions". That's very useful.
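A sketch of that idiom (the types are invented for illustration): while the guard exists, the config it mutably borrows is inaccessible to anyone else, and the compiler enforces it.

    struct Config { threshold: u32 }

    // Holding a `Guard` mutably borrows the `Config` for lifetime 'a,
    // making the `Config` untouchable elsewhere until the guard is gone.
    struct Guard<'a> { config: &'a mut Config }

    fn main() {
        let mut config = Config { threshold: 10 };
        let guard = Guard { config: &mut config };
        // Rejected with error[E0502] while `guard` is still alive:
        // println!("{}", config.threshold);
        guard.config.threshold = 20; // only the guard may touch it
    }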
But memory-unsafe code doesn't just segfault: it can corrupt your invariants and continue running, or open a door for an attacker to achieve RCE on the machine. Memory safety is necessary (but not sufficient) to uphold what should be the simplest invariant of any code base: that program execution matches the source code in the first place.
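A sketch of that silent-corruption mode, written in Rust since the unsafe escape hatch makes it easy to show (`Vec::set_len` is a real API, deliberately misused here): the broken invariant doesn't crash anything, execution just carries on over garbage.

    fn main() {
        let mut v: Vec<u8> = Vec::with_capacity(16);
        unsafe {
            // BUG: claims 16 initialized elements that were never
            // written. Undefined behavior, exactly like a C buffer
            // mistake.
            v.set_len(16);
        }
        // No segfault; the "every element is initialized" invariant is
        // silently broken, and downstream code keeps running on
        // whatever bytes happened to be in that memory.
        println!("{:?}", v);
    }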
C and C++ don't have such a subset defined as part of their standards. "Left to users" means left to additional tools, which do exist. Rust only has memory safety by default; that's a small part of the problem, and it's not clear to me that having it helps with functional safety (although I agree that it helps elsewhere).
I'd be happy to explain at length why the existing tools and standards are insufficient if you want. It'd be easier to have that discussion over another medium than HN comment chain though.
If you think a strong and convenient type system helps with functional safety, then Rust helps with functional safety. This is also generally the experience in the industry.
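A sketch of what that looks like in practice (hypothetical units, not anyone's real code): distinct newtypes turn a unit mix-up, a classic functional-safety bug, into a compile error instead of a runtime hazard.

    // Zero-cost wrapper types that the compiler keeps distinct.
    #[derive(Debug, Clone, Copy)]
    struct Meters(f64);
    #[derive(Debug, Clone, Copy)]
    struct Feet(f64);

    fn set_target_altitude(target: Meters) {
        println!("climbing to {:?}", target);
    }

    fn main() {
        let reading = Feet(3000.0);
        // set_target_altitude(reading); // error[E0308]: mismatched types
        set_target_altitude(Meters(reading.0 * 0.3048)); // explicit conversion
    }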
I am not convinced a strong type system helps with functional safety, and I am not even deeply impressed by Rust's type system. The scientific literature doesn't even seem that clear about whether a strong type system substantially reduces software defects in general. I believe in proofs, though. I generally believe complexity is bad, and both C++ and Rust are too complex for my taste. I also think Rust has severe supply-chain issues.
This is comparing C, C++, Ada, SPARK, and Rust... I think it's obvious.