> I'm just old and moany, and I need to step aside for bigger and better things such as Rust.
You are. This is firm "I don't want to have to learn new things" territory, which isn't a viable attitude in this industry.
In any case Rust is usually easier than C (excluding buggy C which is very easy to write), and certainly easier than actually learning the Git or Linux codebases.
I think it is often underappreciated by people who haven't worked in security just how hard writing high-quality C is in practice.
We might also have different priorities. I do not care too much that Google and Apple want to lock down their smartphone spyware and sales platforms. The supply chain risks and maintenance burden imposed on me by the Rust ecosystem are much more of a concern.
I don't know what this has to do with locking down phones, but I do appreciate not getting compromised just for cloning a repo or opening my laptop at a coffee shop.
(There is a persistent idea that the lack of memory safety in C is good because it allows people to jailbreak their phones.)
This is not what I said, but memory safety is certainly not a high priority for my own security. I still think memory safety is important and I also think Rust is an interesting language, but... the hype is exaggerated and driven by certain industry interests.
Rust isn't popular just because of memory safety though. I think the memory safety message is maybe a little too loud.
It's also a modern language with fantastic tooling, very high quality library ecosystem and a strong type system that reduces the chance of all kinds of bugs.
It's obviously not perfect: compile time is ... ok, there aren't any mature GUI toolkits (though that's true of many languages), async Rust has way too many footguns. But it's still waaaaay better than C or C++. In a different league.
Rust is a nice language, but it was pushed too aggressively with the argument of "memory safety" at all costs, ignoring other considerations. And Cargo is certainly a disaster, even though it may be considered "fantastic tooling" by some. In any case, I do not think it is funny that I now depend on packages without timely security updates in my distribution. This makes me less secure.
Is there better tooling in C/C++? No snark intended.
I guess this depends on what you consider good tooling. I am relatively happy with C tooling. But if you want to quickly assemble something from existing libraries, then language-level package managers like npm, cargo, and pip are certainly super convenient. But then, I think this convenience comes at a high cost. We now have worms again; I thought those times were long over... IMHO package management belongs in a distribution with quality control, and dependencies should be minimized and carefully selected.
It can have supply chain attacks like npm... That high-quality library ecosystem is also a liability.
I'm an industry interest, in the sense that I work in the software industry and I have an interest in Rust.
Fair enough. I just find it mind-boggling how much money flows into completely new language ecosystems compared to improvements for C/C++ tooling, which would clearly be much more effective if you really cared about the overall security of the free software world.
The issue with investing similar levels of effort into making C++ safer is the C++ standards committee doesn't want to adopt those kinds of improvements.
Which is also the reason why we don't have #pragma once and many other extensions like it. Except we do. Compilers can add Rust-like static analyzers without the standards committee mandating it.
I am not interested in C++; it is also far too complex. In my opinion software needs to become simpler, not more complicated, and I fear Rust might be a step in the wrong direction.
Personally, I use Rust (and have been using it for close to 9 years) because I've been part of multiple teams that have delivered reliable, performant systems software in it, within a budget that would clearly be impossible in any other language. Rust acts as a step change in getting things done.
While I really really want devices I can own, I don't want to compromise security to do it. We need to do two things:
1. Lobby politicians to write laws that allow us to actually own the devices we bought.
2. Stop the FUD that a device that can be jailbroken is insecure. I heard this from our frigging CSO, of all people, and it's patently false, just FUD by Apple and Google who want you to be afraid of owning your device.
I want a device that's as secure as possible, but that I can own. I don't want to hack my own self just to get what I paid for.
It is a sad thing, but I do root against secure boot initiatives because they almost entirely work to limit users' freedom instead of improving their security.
Thanks, that take is... Something. I'm all for user-controllable hardware but I think that's a regulatory problem not a technical one.
How often do you clone a repo and not immediately run build commands that execute scripts provided by the repo?
Who says you do not? :)
Oh, I think it's a real problem, that's why I'm in favor of improved tools.
It's not "having to learn something new", but "having to be good at two things, both of which are full languages with their own specifics, problems and ways to solve them, two sets of compilers and some duct tape to hold them together.
It's like putting steak on a pizza... pizza is good, steak is good, steak on a pizza might be good too, but to actually do that in production, you now need two prep stations and you can't mess up either one.
> You are. This is firm "I don't want to have to learn new things" territory, which isn't a viable attitude in this industry.
It's viable, but limiting. Sometimes you have to do things you don't want to, which is why it's called work. But if you can choose what platforms you work on, you can orient towards platforms where things change less, and then you don't need to learn new things as often.
Chances are, if you get into the weeds in a lot of C programs, Rust is in your future, but it's viable to not want that, and to moan about it while doing it when you need to.
No one’s laying off COBOL programmers. Specialization has its upsides once the market isn’t saturated!
Well only because 99% of the world's COBOL developers were laid off decades ago (or switched to another language).
The more things change,
As someone with experience in this specific niche, yes they absolutely are. There are no longer ten thousand retail chains asking for COBOL-based counterpoint PoS mods on a yearly basis.
The COBOL market is basically tenured experts in existing systems or polyglots helping migrate the systems to VB or C# at this point. The market has plummeted and now it's in the final deflationary shrink before death.
Ah, damn, I’m sad to hear that. Always respected the language. :/
Technical debt is real tho, and the Rust-C interop is not the best ever.
Why not rewrite the entirety of git in Rust and have two compatible versions?
Rust is over 10 years old now. It has a track record of delivering what it promises, and a very satisfied growing userbase.
OTOH static analyzers for C have been around for longer than Rust, and we're still waiting for them to disprove Rice's theorem.
AI tools so far are famous for generating low-quality code, and generating bogus vulnerability reports. They may eventually get better and end up being used to make C code secure - see DARPA's TRACTOR program.
The applicability of Rice's theorem with respect to static analysis or abstract interpretation is more complex than you implied. First, static analysis tools are largely pattern-oriented. Pattern matching is how they sidestep undecidability. These tools have their place, but they aren't trying to be the tooling you or the parent claim. Instead, they are more useful to enforce coding style. This can be used to help with secure software development practices, but only by enforcing idiomatic style.
Bounded model checkers, on the other hand, are this tooling. They don't have to disprove Rice's theorem to work. In fact, they work directly with this theorem. They transform code into state equations that are run through an SMT solver. They are looking for logic errors, use-after-free, buffer overruns, etc. But, they also fail code for unterminated execution within the constraints of the simulation. If abstract interpretation through SMT states does not complete in a certain number of steps, then this is also considered a failure. The function or subset of the program only passes if the SMT solver can't find a satisfactory state that triggers one of these issues, through any possible input or external state.
These model checkers also provide the ability for user-defined assertions, making it possible to build and verify function contracts. This allows proof engineers to tie in proofs about higher level properties of code without having to build constructive proofs of all of this code.
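To make that concrete, here is a minimal sketch of what a CBMC proof harness can look like. The function `saturating_add`, the harness name, and the contract are hypothetical, purely to illustrate the workflow; `__CPROVER_assume`-style assumptions and plain `assert` are the primitives the checker actually exercises over all possible inputs.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical function under verification. */
uint32_t saturating_add(uint32_t a, uint32_t b)
{
    uint32_t sum = a + b;                 /* unsigned wrap-around is defined */
    return (sum < a) ? UINT32_MAX : sum;  /* clamp on overflow */
}

/* CBMC treats functions without a body as returning a
 * nondeterministic value, i.e. "any possible input". */
uint32_t nondet_u32(void);

/* Proof harness. Run with something like:
 *   cbmc harness.c --function harness --bounds-check --pointer-check
 * CBMC searches for any input that violates an assertion and, if it
 * finds one, prints a concrete counterexample trace. */
void harness(void)
{
    uint32_t a = nondet_u32();
    uint32_t b = nondet_u32();

    uint32_t r = saturating_add(a, b);

    /* User-defined contract: the result never drops below either operand. */
    assert(r >= a && r >= b);
}
```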
Rust has its own issues. For instance, its core library is unsafe, because it has to use unsafe operations to interface with the OS, or to build containers or memory management models that simply can't be described with the borrow checker. This has led to its own CVEs. To strengthen the core library, core Rust developers have started using Kani -- a bounded model checker like those available for C or other languages.
Bounded model checking works. This tooling can be used to make either C or Rust safer. It can be used to augment proofs of theorems built in a proof assistant to extend this to implementation. The overhead of model checking is about that of unit testing, once you understand how to use it.
It is significantly less expensive to teach C developers how to model check their software using CBMC than it is to teach them Rust and then have them port code to Rust. Using CBMC properly, one can get better security guarantees than using vanilla Rust. Overall, an Ada + Spark, CBMC + C, Kani + Rust strategy coupled with constructive theory and proofs regarding overall architectural guarantees will yield equivalent safety and security. I'd trust such pairings of process and tooling -- regardless of language choice -- over any LLM derived solutions.
Sure it's possible in theory, but how many C codebases actually use formal verification? I don't think I've seen a single one. Git certainly doesn't do anything like that.
I have occasionally used CBMC for isolated functions, but that must already put me in the top 0.1% of formal verification users.
It's not used more because it is unknown, not because it is difficult to use or impractical.
I've written several libraries and several services now that have 100% coverage via CBMC. I'm quite experienced with C development and with secure development, and reaching this point always finds a handful of potentially exploitable errors I would have missed. The development overhead of reaching this point is about the same as the overhead of getting to 80% unit test coverage using traditional test automation.
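For a flavour of the kind of "potentially exploitable error I would have missed", here is a hypothetical sketch (the function and names are invented): an off-by-one that only fires when the input hits the maximum length, which a happy-path unit test rarely covers but which CBMC's bounds checking reports with a concrete trace.

```c
#include <stddef.h>

#define TOKEN_MAX 16

/* Hypothetical function with a subtle off-by-one: when the input is
 * exactly TOKEN_MAX bytes long, the terminating NUL is written one
 * byte past the end of 'out'. */
void copy_token(char out[TOKEN_MAX], const char *in)
{
    size_t i = 0;
    while (in[i] != '\0' && i < TOKEN_MAX) {
        out[i] = in[i];
        i++;
    }
    out[i] = '\0';   /* i can equal TOKEN_MAX here: out-of-bounds write */
}

/* Harness: the input buffer's contents are nondeterministic (CBMC
 * models uninitialized locals as "any value"), forced to be
 * NUL-terminated so the loop is bounded for unwinding.
 * Run with something like:
 *   cbmc copy_token.c --function harness --bounds-check \
 *        --unwind 17 --unwinding-assertions */
void harness(void)
{
    char in[TOKEN_MAX + 1];
    __CPROVER_assume(in[TOKEN_MAX] == '\0');

    char out[TOKEN_MAX];
    copy_token(out, in);
}
```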
You're describing cases in which static analyzers/model checkers give up, and can't provide a definitive answer. To me this isn't side-stepping the undecidability problem, this is hitting the problem.
C's semantics create dead-ends for non-local reasoning about programs, so you get inconclusive/best-effort results propped up by heuristics. This is of course better than nothing, and still very useful for C, but it's weak and limited compared to the guarantees that safe Rust gives.
The bar set for Rust's static analysis and checks is to detect and prevent every UB in safe Rust code. If there's a false positive, people file it as a soundness bug or a CVE. If you can make Rust's libstd crash from safe Rust code, even if it requires deliberately invalid inputs, it's still a CVE for Rust. There is no comparable expectation of having anything reliably checkable in C. You can crash stdlib by feeding it invalid inputs, and it's not a CVE, just don't do that. Static analyzers are allowed to have false negatives, and it's normal.
You can get better guarantees for C if you restrict semantics of the language, add annotations/contracts for gaps in its type system, add assertions for things it can't check, and replace all the C code that the checker fails on with alternative idioms that fit the restricted model. But at that point it's not a silver bullet of "keep your C codebase, and just use a static analyzer", but it starts looking like a rewrite of C in a more restrictive dialect, and the more guarantees you want, the more code you need to annotate and adapt to the checks.
And this is basically Rust's approach. The unsafe Rust is pretty close to the semantics of C (with UB and all), but by default the code is restricted to a subset designed to be easy for static analysis to be able to guarantee it can't cause UB. Rust has a model checker for pointer aliasing and sharing of data across threads. It has a built-in static analyzer for memory management. It makes programmers specify contracts necessary for the analysis, and verifies that the declarations are logically consistent. It injects assertions for things it can't check at compile time, and gives an option to selectively bypass the checkers for code that doesn't fit their model. It also has a bunch of less rigorous static analyzers detecting certain patterns of logic errors, missing error handling, and flagging suspicious and unidiomatic code.
It would be amazing if C had a static analyzer that could reliably assure with a high level of certainty, out of the box, that a heavily multi-threaded complex code doesn't contain any UB, doesn't corrupt memory, and won't have use-after-free, even if the code is full of dynamic memory (de)allocations, callbacks, thread-locals, on-stack data of one thread shared with another, objects moved between threads, while mixing objects and code from multiple 3rd party libraries. Rust does that across millions lines of code, and it's not even a separate static analyzer with specially-written proofs, it's just how it works.
Such analysis requires code with sufficient annotations and restricted to design patterns that obviously conform to the checkable model. Rust had a luxury of having this from the start, and already has a whole ecosystem built around it.
C doesn't have that. You start from a much worse position (with mutable aliasing, const that barely does anything, and a type system without ownership or any thread safety information) and need to add checks and refactor code just to catch up to the baseline. And in the end, with all that effort, you end up with a C dialect peppered with macros, and merely fix one problem in C, without getting additional benefits of a modern language.
CBMC+C has a higher ceiling than vanilla Rust, and SMT solvers are more powerful, but the choice isn't limited to C+analyzers vs only plain Rust. You still can run additional checkers/solvers on top of everything Rust has built-in, and further proofs are easier thanks to being on top of stronger baseline guarantees and a stricter type system.
If we mark any case that might be undecidable as a failure case, and require that code be written that can be verified, then this is very much sidestepping undecidability by definition. Rust's borrow checker does the same exact thing. Write code that the borrow checker can't verify, and you'll get an error, even if it might be perfectly valid. That's by design, and it's absolutely a design meant to sidestep undecidability.
Yes, CBMC + C provides a higher ceiling. Coupling Kani with Rust results in the exact same ceiling as CBMC + C. Not a higher one. Kani compiles Rust to the same goto-C that CBMC compiles C to. Not a better one. The abstract model and theory that Kani provides is far more strict than what Rust provides with its borrow checker and static analysis. It's also more universal, which is why Kani works on both safe and unsafe Rust.
If you like Rust, great. Use it. But, at the point of coupling Kani and Rust, it's reaching safety parity with model checked C, and not surpassing it. That's fine. Similar safety parity can be reached with Ada + Spark, C++ and ESBMC, Java and JBMC, etc. There are many ways of reaching the same goal.
There's no need to pepper C with macros or to require a stronger type system with C to use CBMC and to get similar guarantees. Strong type systems do provide some structure -- and there's nothing wrong with using one -- but unless we are talking about building a dependent type system, such as what is provided with Lean 4, Coq, Agda, etc., it's not enough to add equivalent safety. A dependent type system also adds undecidability, requiring proofs and tactics to verify the types. That's great, but it's also a much more involved proposition than using a model checker. Rust's H-M type system, while certainly nice for what it is, is limited in what safety guarantees it can make. At that point, choosing a language with a stronger type system or not is a style choice. Arguably, it lets you organize software in a better way that would require manual work in other languages. Maybe this makes sense for your team, and maybe it doesn't. Plenty of people write software in Lisp, Python, Ruby, or similar languages with dynamic and duck typing. They can build highly organized and safe software. In fact, such software can be made safe, much as C can be made safe with the appropriate application of process and tooling.
I'm not defending C or attacking Rust here. I'm pointing out that model checking makes both safer than either can be on their own. As with my original reply, model checking is something different than static analysis, and it's something greater than what either vanilla C or vanilla Rust can provide on their own. Does safe vanilla Rust have better memory safety than vanilla C? Of course. Is it automatically safe against the two dozen other classes of attacks by default and without careful software development? No. Is it automatically safe against these attacks with model checking? Also no. However, we can use model checking to demonstrate the absence of entire classes of bugs -- each of these classes of bugs -- whether we model check software written in C or in Rust.
If I had to choose between model checking an existing codebase (git or the Linux kernel), or slowly rewriting it in another language, I'd choose the former every time. It provides, by far, the largest gain for the least amount of work.
In my experience current AI is still far from reasoning about the kind of hard-to-spot bugs in C that lead to the worst exploits. Rust solves most of these by design. It isn't about adding a second language - it is about slowly phasing out a language that is being misused in areas it shouldn't be in.
C will at some point be relegated to being an educational language, incredibly valuable due to few but good abstractions over assembly. It will continue to exist for decades in most systems, but hopefully it won't be used outside of the maintenance of legacy systems.