Let's say you want to call Win32 (or Mac) OS functions; all of a sudden you're doing all kinds of wonky pointer work, because that's how those operating systems are architected. Doing unsafe stuff is pretty inevitable if you want to do anything non-hello-world-ish.

> Doing unsafe stuff is pretty inevitable if you want to do anything non-hello-world-ish.

So the vast majority of Rust projects involve writing at least one unsafe block? Is that really your claim?

And even if you do end up writing an unsafe block, that should be a massive flag: the code in that block deserves extra comments explaining why it is safe, and extra unit tests verifying that it does not blow up.

How do you know the unsafe operation is safe? What are the preconditions the code block has? Write it down, review it, test it.
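A minimal sketch of that practice in Rust (the function name and precondition are illustrative, not from any particular codebase): the `SAFETY` comment states the invariant the unsafe operation relies on, and the surrounding safe check establishes it.

```rust
/// Returns the first byte of `bytes`, or 0 if the slice is empty.
/// The unchecked access is purely for illustration; in real code you
/// would just write `bytes.first().copied().unwrap_or(0)`.
fn first_byte(bytes: &[u8]) -> u8 {
    if bytes.is_empty() {
        return 0;
    }
    // SAFETY: we just checked that `bytes` is non-empty, so index 0 is
    // in bounds. This precondition is exactly what makes the unchecked
    // access sound; if the check above is ever removed, this becomes UB.
    unsafe { *bytes.get_unchecked(0) }
}

fn main() {
    // Unit tests verifying both the normal and the edge case:
    assert_eq!(first_byte(b"hello"), b'h');
    assert_eq!(first_byte(b""), 0);
}
```

Writing the precondition down like this is what makes the block reviewable: a reader can check the comment against the code without re-deriving the invariant themselves.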

Exactly; I feel like a lot of people seem to misunderstand what Rust is trying to solve. It's fundamentally not trying to make unsafe code impossible; it's making the number of places you need to audit it a tiny fraction of your codebase compared to needing to audit the entirety of a C or C++ codebase. When I'm doing code reviews, you'd better believe I'm going to spend some extra time on any unsafe block I see to figure out if it's necessary and if so, if it's actually safe safe (with the default assumption for both of those being that they're not until I can convince myself otherwise).

The thing is, you can actually write quite good C code (see the OpenBSD project). The power of C is that it's pragmatic: it lets you write code while you take full responsibility for being a responsible person. To err is human, but we developed a set of practices to handle this (by making sure the gun is unloaded and the safety is on before storing it, to avoid putting holes in feet).

I like type checking and other compile time checks, but sometimes they feel very ceremonial. And all of them are inference based, so they still rely on the axioms being right and on the chain of rules not being broken somewhere. And in the end they are annotations, not the runtime algorithm.

> To err is human

Yes, which is precisely why I write in Rust, because the compiler errs less than I do.

It may, but it still requires careful annotations. So you should hope that you have not made an error there and described the wrong structure for the code.

It seems like you have this backwards. Messing up lifetimes in safe Rust can't cause unsafety; the compiler checks if the lifetimes are valid, and if they're not, you get a compiler error. You don't need to "hope" you did it right because the entire point is that you can't compile if you didn't.
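A minimal sketch of the point, with an illustrative function: a lifetime mistake in safe Rust is rejected at compile time, not turned into a use-after-free at run time.

```rust
// The borrow checker ties the returned reference's lifetime to the
// input slice's lifetime, so this compiles:
fn first<'a>(slice: &'a [i32]) -> &'a i32 {
    &slice[0]
}

fn main() {
    // By contrast, this (commented-out) version never compiles; the
    // compiler reports error[E0597] instead of letting `r` dangle:
    //
    // let r;
    // {
    //     let x = 5;
    //     r = &x; // error[E0597]: `x` does not live long enough
    // }
    // println!("{r}");

    let data = [1, 2, 3];
    assert_eq!(*first(&data), 1);
}
```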

On the other hand, when you're relying on your ability to "actually write quite good C code"...you'd better hope that you have not made an error there. In practice, some of the most widely used C libraries in the world still seem to have bugs like this, so I don't really understand why you'd think that's a winning strategy.

Making use of win32 functions doesn't turn off bounds checking in your Rust code.

A tiny fraction of programs need to use win32 or Mac OS functions beyond the standard library or other safe wrappers for said functions.

And even in those programs, only a fraction of the code in them is actually directly making calls to those APIs! Having everything else in safe code still makes it easier to audit than if the entire codebase is in C or C++.

So what? Just because you used the keyword `unsafe` to call an unsafe API does not mean that you are going to use unsafe pointer access to write to a vector.
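A small sketch of this (the helper is hypothetical; it stands in for any function that uses `unsafe` internally, such as an FFI call): the `unsafe` block is scoped to that one function, and safe indexing elsewhere in the same program is still bounds-checked.

```rust
// A function that uses `unsafe` internally (standing in for a raw OS
// API call) does not disable any checks in the safe code around it.
fn len_via_raw_pointer(v: &Vec<i32>) -> usize {
    let p: *const Vec<i32> = v;
    // SAFETY: `p` was just derived from a valid reference and is
    // dereferenced immediately, while that reference is still live.
    unsafe { (*p).len() }
}

fn main() {
    let v = vec![1, 2, 3];
    assert_eq!(len_via_raw_pointer(&v), 3);

    // Safe indexing in the same program is still bounds-checked:
    // an out-of-bounds access panics, it does not corrupt memory.
    let result = std::panic::catch_unwind(|| v[10]);
    assert!(result.is_err());
}
```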