What are you hoping it will achieve?

The internet went down because Cloudflare used a bad config... a config parsed by a Rust app.

One of these days the witch hunt against C will go away.

A service going down is a million times better than being exploited by an attacker. If this is a witch hunt then C is an actual witch.

Why can it be exploited? I’ve configured my OS so my process is isolated to the resources it needs.

What language is your OS written in?

It’s written in C, I’m glad you asked. Do you have any exploits in the Linux process encapsulation to share?

Surely you’re not suggesting that the Rust compiler never produces exploitable code?

I probably don’t have such an exploit, since you’re probably running something up to date. There have been many in the past. I doubt the last one to be fixed is the last one to exist.

If your attitude is that getting exploited doesn’t matter because your software is unprivileged, you need some part of your stack to be unexploitable. That’s a tall order if everything is C.

You can get exploitable code out of any compiler. But you’re far more likely to get it from real-world C than real-world Rust.

> you need some part of your stack to be unexploitable.

Kernel level process isolation is extremely robust.

> If your attitude is that getting exploited doesn’t matter because your software is unprivileged

It’s not that exploits don’t matter. It’s that process architecture is a stronger form of guarantee than anything provided by a language runtime.

I agree that the place where Rust is most beneficial is for programs that must be privileged and that are likely to face attack - such as a web server.

But the idea that you can’t securely use a C program in your stack or that Rust magically makes process isolation irrelevant is incorrect.

How can process architecture be a stronger guarantee than anything provided by a language runtime when it is enforced by software written in a language?

You have a process receiving untrusted, potentially malicious input from the outside. If there’s an exploit then an attacker can potentially take control of the process. Your process is isolated, that’s good. But it can still communicate with other parts of your system. It can make syscalls. Now you’re in the same situation where you have a program receiving untrusted, potentially malicious input from the outside, but now “the outside” is your subverted process, and “a program” is the kernel. The same factors that make your program difficult to secure from exploits if it’s written in C also apply to the kernel.

I’m not sure where those ideas at the end of your comment came from. I certainly didn’t say them.

> How can process architecture be a stronger guarantee than anything provided by a language runtime when it is enforced by software written in a language?

Please learn more about this topic. You don't understand OS security models.

The internet didn't go down, and you're mischaracterizing it as a parsing issue: the feature list would've exceeded a memory allocation limit, and they didn't hardcode a fallback config for that case. What memory safety promise did Rust fail there, exactly?
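For what it's worth, the failure mode being described looks roughly like this. This is a hypothetical sketch, not Cloudflare's actual code: the function name, error type, and the limit value are all made up for illustration.

```rust
// Hypothetical sketch of "hard limit, no fallback". All names and
// values are illustrative, not taken from Cloudflare's code.
const MAX_FEATURES: usize = 200; // assumed preallocated capacity

fn load_features(rows: Vec<String>) -> Result<Vec<String>, String> {
    if rows.len() > MAX_FEATURES {
        // A memory-safe, well-defined error path: no UB, no corruption.
        return Err(format!(
            "feature file has {} entries, limit is {}",
            rows.len(),
            MAX_FEATURES
        ));
    }
    Ok(rows)
}

fn main() {
    let small: Vec<String> = (0..3).map(|i| format!("feature_{i}")).collect();
    assert!(load_features(small).is_ok());

    let oversized: Vec<String> = (0..500).map(|i| format!("feature_{i}")).collect();
    let res = load_features(oversized);
    assert!(res.is_err());
    // The outage pattern: calling res.unwrap() here, with no fallback
    // to a known-good config, turns this well-defined error value into
    // a process-wide panic. That's an error-handling decision, not a
    // memory-safety failure.
}
```

The point being that Rust's guarantee held: the oversized input produced a clean, reportable error, and the crash came from what the code chose to do with it.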

I think the point is memory bugs are only one (small) subset of bugs.

The conventional wisdom is ~70% of serious security bugs are memory safety issues.

https://www.cisa.gov/sites/default/files/2023-12/CSAC_TAC_Re...

Security bugs, as distinct from bad security processes, are a small subset of bugs.

A panic in Rust is easier to diagnose and fix than some error or garbage data caused by an out-of-bounds access in some random place in the call stack.
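To make that concrete, a minimal sketch (the vector and the out-of-range index are arbitrary stand-ins for unexpected input):

```rust
fn main() {
    let v = vec![1u8, 2, 3];
    let i = 7; // out of range, e.g. derived from unexpected input

    // Checked access: the bounds failure surfaces as a value, right here.
    assert_eq!(v.get(i), None);

    // Direct indexing panics at this exact line with a message naming
    // the length and the bad index, instead of silently reading past
    // the buffer and corrupting something far away, as C may do.
    let panicked = std::panic::catch_unwind(move || v[i]).is_err();
    assert!(panicked);
}
```

Either way the fault is pinned to the faulting line, rather than manifesting as garbage data somewhere else in the program later.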