> Whether the language was designed for it does not matter, as long as it works. And it works well.

It matters fundamentally. "Works well" for research projects or limited AWS components is not equivalent to DO-178C Level A certification where mathematical proof is required. The verification community distinguishes between "we verified some properties of some code" and "the language guarantees these properties for all conforming code."

> Few of the tools can verify unsafe code is free of UB

With heavy annotation burden, for specific patterns. SPARK has no unsafe - the entire language is verifiable. That's the difference between "can be verified with effort" and "is verified by construction."

> You don't need to prove anything about [traits/lifetimes]: they're purely a type level thing

Trait resolution determines which code executes (monomorphization). Lifetimes affect drop order and program semantics. These aren't erased - they're compiled into the code being verified. SPARK's type system is verifiable; Rust's requires the verifier to trust the compiler's type checker.
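
A minimal sketch of why that matters for a verifier (hypothetical trait, not from any real codebase): the same call site compiles to two different bodies, chosen during monomorphization, so a tool has to reproduce that resolution to know which code it is actually proving things about.

    trait Saturate {
        fn saturate(self) -> Self;
    }

    impl Saturate for u8 {
        // Clamp just below the u8 maximum.
        fn saturate(self) -> Self { self.min(u8::MAX - 1) }
    }

    impl Saturate for i32 {
        // Entirely different semantics: clamp negatives to zero.
        fn saturate(self) -> Self { self.max(0) }
    }

    fn process<T: Saturate>(x: T) -> T {
        // One source line, two compiled bodies - which one runs is
        // decided by trait resolution during monomorphization.
        x.saturate()
    }

    fn main() {
        assert_eq!(process(255u8), 254);
        assert_eq!(process(-5i32), 0);
    }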

> The code that procedural macros generate is visible to these tools

The macro logic is unverified Rust code executing at compile time. A bug in the macro generates incorrect code that may pass verification. SPARK has no equivalent escape hatch.
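
A contrived sketch of that failure mode (using a declarative macro for brevity; a procedural macro only makes the generating logic more opaque): the bug lives in the unverified macro, while the emitted code type-checks, never panics, and would sail through a no-panic proof.

    macro_rules! abs_diff {
        // Bug: the first arm should expand to `$a - $b`.
        ($a:expr, $b:expr) => {
            if $a > $b { $b - $a } else { $b - $a }
        };
    }

    fn main() {
        // Compiles, type-checks, never panics - and prints -4 instead
        // of 4. The defect is in the macro logic, which no verifier
        // ever sees as a proof obligation.
        println!("{}", abs_diff!(7i32, 3i32));
    }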

> some of the tools do support interior mutability (with extra annotations I believe)

Exactly - manual annotation burden. SPARK's verification is automatic for all conforming code. The percentage of manual proof effort is a critical metric in formal verification.
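
For context, a small sketch of why interior mutability costs annotations: `RefCell` moves Rust's aliasing discipline from compile time to run time, so the type system alone no longer rules out conflicting access, and a static verifier must be told (or prove) the borrow state at each use site.

    use std::cell::RefCell;

    fn main() {
        let log = RefCell::new(vec![1, 2, 3]);

        let reader = log.borrow();
        // The type checker accepts this second, mutable borrow; only a
        // run-time check stops it. A static tool therefore needs extra
        // facts about every `RefCell` use site.
        assert!(log.try_borrow_mut().is_err());
        drop(reader);

        // Once the shared borrow is gone, mutation is allowed again.
        log.borrow_mut().push(4);
    }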

> That is not a problem - in fact most of them prove precisely that: that code does not panic

So they're doing what SPARK does automatically - proving absence of runtime errors. But SPARK guarantees this for the language; Rust tools must verify it per codebase.
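
As a sketch of that per-codebase effort, here is a harness in the style of Kani (one of the Rust verification tools, run via `cargo kani`): the tool can prove the function panic-free for all inputs, but the harness and the properties are written by hand.

    // Written to dodge the classic `(a + b) / 2` overflow.
    fn midpoint(a: u32, b: u32) -> u32 {
        a / 2 + b / 2 + (a % 2 + b % 2) / 2
    }

    // Only compiled under `cargo kani`; the tool checks all 2^64
    // input pairs symbolically.
    #[cfg(kani)]
    #[kani::proof]
    fn midpoint_never_panics() {
        let a: u32 = kani::any();
        let b: u32 = kani::any();
        let m = midpoint(a, b);
        // The functional property still has to be stated by hand.
        assert!(a.min(b) <= m && m <= a.max(b));
    }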

> you can follow what rustc does (and in fact it's easier for these tools, since they integrate with rustc)

"Follow the compiler's behavior" is not formal semantics. Formal verification requires mathematical definitions independent of implementation. This is why SPARK has an ISO standard with formal semantics, not "watch what GNAT does."

> Rust is very well amenable to formal verification [...] Perhaps even more amenable than Ada

Then why doesn't it have DO-178C, EN 50128, or ISO 26262 certification toolchains after a decade? SPARK achieved this because verification was the design goal. Rust's design goals were different - and valid - but they weren't formal verifiability.

> SPARK also does not support major parts of Ada

Correct - by design. SPARK excludes features incompatible with efficient verification (unrestricted pointers, exceptions in contracts, controlled types). This is intentional subsetting for verification. Claiming Rust tools "support more of Rust" ignores that some Rust features are fundamentally unverifiable without massive annotation burden.

The core issue: you're comparing research tools that can verify some properties of some Rust programs with significant manual effort, to a language designed so that conforming programs are automatically verifiable with mathematical guarantees. These are different categories of assurance.

I could be wrong, but the Ferrocene Language Specification, FLS, adopted by Rust as its specification, may be severely incomplete.

For instance, it describes atomics, but I am not sure that it defines Rust's memory model or the semantics of atomics anywhere.

This Ferrocene page looks very short, and has a lot of links to Rust API documentation.

https://public-docs.ferrocene.dev/main/specification/concurr...

Conversely, the Rustonomicon has this page that says that Rust just uses C++20's memory model for its atomics. Yet I do not believe that memory model is defined anywhere in FLS.

https://doc.rust-lang.org/nomicon/atomics.html
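
To illustrate what I mean, this is the kind of guarantee a memory-model section has to pin down: the release/acquire pairing below is what makes the write to DATA visible to the reader, and I cannot find where FLS defines why that holds.

    use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
    use std::thread;

    static DATA: AtomicU32 = AtomicU32::new(0);
    static READY: AtomicBool = AtomicBool::new(false);

    fn main() {
        let writer = thread::spawn(|| {
            DATA.store(42, Ordering::Relaxed);
            // Release: publishes the DATA write to whoever acquires READY.
            READY.store(true, Ordering::Release);
        });
        let reader = thread::spawn(|| {
            // Acquire: synchronizes-with the release store above.
            while !READY.load(Ordering::Acquire) {}
            // Guaranteed by the C++20-inherited memory model, per the
            // Rustonomicon - not, as far as I can tell, by FLS.
            assert_eq!(DATA.load(Ordering::Relaxed), 42);
        });
        writer.join().unwrap();
        reader.join().unwrap();
    }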

I do not know if FLS defines unwinding of panics anywhere either.

Is FLS severely incomplete regarding rustc and Rust, despite being adopted by Rust as its specification? It almost seems fake.

It is certainly incomplete. Virtually all specifications for programming languages are. It is good enough for safety critical work, which is a higher bar than most.

The idea is that you adopt it and improve it over time. It is more complete than the Reference, which is the previous project in this area.

Rust does have a qualified compiler: Ferrocene. It supports ISO 26262 (ASIL D), IEC 61508 (SIL 4) and IEC 62304 currently, with more on the way, including, I believe, plans for DO-178C. It’s being driven by actual adoption, so they’ve started in automotive and will move to other industries eventually.

Do you happen to know how Ferrocene relates to AdaCore's Rust compiler?

https://www.adacore.com/press/adacore-announces-the-first-qu...

> ISO 26262 (ASIL D)

Isn't that only for a very small subset of Rust and its standard library?

Also, do you happen to be able to explain this comment?

https://reddit.com/r/rust/comments/1nhk30y/ferrous_systems_j...

> I think it's important to note, the certification is only for a subset of the run-time, which means some language features will not be available. Also, the certification is only to SIL-2 level, so any projects requiring SIL-3 or 4 will still not be able to use the Ferrocine compiler!

I know that Ferrocene and AdaCore were working together, but then parted ways. I am assuming they're both doing roughly the same thing: qualifying the upstream compiler with some patches. I know that Ferrocene's patches are mostly about adding a new platform; they've upstreamed all the other stuff.

> Isn't that only for a very small subset of Rust and its standard library?

It is currently for the compiler only. This ties into the next bit, though:

> Also, do you happen to be able to explain this comment?

Yeah, you can see me posting in that thread, though not that specific sub-thread. Rust has a layered standard library: core, alloc (which layers memory allocation on top of core), and std (which layers OS-specific things on top of that). There are three parts to this comment:

First, because it's only core, you don't get the stuff from alloc and std. So, no dynamic memory allocation or OS-specific things like filesystem access. That's usually not a problem for these kinds of projects. It is what they mean by 'some language features will not be available', but they're mistaken: all language features are available; it's some standard library features that are not. No language features require allocation, for example.
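
As a sketch, a core-only lib.rs can look like this (hypothetical function, purely illustrative): slices, iterators, generics and traits all come from core, with no heap in sight.

    // `#![no_std]` drops std, and without `extern crate alloc` there
    // is no allocation either - yet every language feature still works.
    #![no_std]

    pub fn average(samples: &[u32]) -> Option<u32> {
        if samples.is_empty() {
            return None;
        }
        // Widen to u64 before summing; u32 sums overflow easily.
        let sum: u64 = samples.iter().map(|&s| u64::from(s)).sum();
        Some((sum / samples.len() as u64) as u32)
    }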

Second, they qualified a subset of libcore for IEC 61508. A founder of Ferrous mentions that ISO 26262 is coming next; they just had a customer that needed IEC 61508 quickly, so they prioritized that. This is how it relates to the above: for ISO 26262, it's just the compiler currently.

I mentioned Ferrocene here: https://news.ycombinator.com/item?id=45494263.

> It matters fundamentally. "Works well" for research projects or limited AWS components is not equivalent to DO-178C Level A certification where mathematical proof is required

As said in a sibling comment, certification for Rust is starting to appear. Rust is young and its usage in regulated industries is just barely beginning. Ada and SPARK are old and mature. It's not a fair comparison - but that doesn't mean Rust couldn't get there.

> > Few of the tools can verify unsafe code is free of UB
>
> With heavy annotation burden, for specific patterns

> > some of the tools do support interior mutability (with extra annotations I believe)
>
> Exactly - manual annotation burden.

SPARK does not support the equivalent (shared mutable pointers) at all. Rust tools verify it with a heavy annotation burden. Which is better?

> Trait resolution determines which code executes (monomorphization). Lifetimes affect drop order and program semantics. These aren't erased - they're compiled into the code being verified. SPARK's type system is verifiable; Rust's requires the verifier to trust the compiler's type checker.

Has the Ada compiler been formally verified? No. So you're trusting the Ada type checker just the same.

The Ada specification was, if I understand correctly, formally defined. But there are efforts to do that for Rust as well (MiniRust, a-mir-formality, and in the past RustBelt).

> The macro logic is unverified Rust code executing at compile time. A bug in the macro generates incorrect code that may pass verification. SPARK has no equivalent escape hatch.

And if you have a script that generates some boilerplate code into your Ada project, is the script logic verified? The outputted code is, and that's what's important. Even with full formal verification, proving that the program fulfills its goals, you don't need to verify helpers like this - only the code they generate. If the generated code works, then even if the script is buggy, who cares.

> So they're doing what SPARK does automatically - proving absence of runtime errors

Exactly - that's the point: to prove the code free of runtime errors.

I'm not sure what you mean by "SPARK guarantees this for the language; Rust tools must verify it per codebase" - does SPARK not need to verify separate codebases separately? Does it somehow work magically for all of your codebases at once?

It's clear at this point that neither of us will get convinced, and I think I said everything I had about this already.

Have fun developing with Ada!

rustc and Rust have some rather serious type system holes and other kinds of issues.

https://github.com/Speykious/cve-rs

https://github.com/lcnr/solver-woes/issues

That first one is an implementation bug that's never been discovered in the wild.

Regardless, all projects have bugs. It's not really germane to qualification, other than that qualification assumes that software has bugs and that you need to, well, qualify them and their impact.

> Certification for Rust is starting to appear. Rust is young and its usage in regulated industries is just barely beginning.

Ferrocene has ISO 26262 qualification for the compiler, not verification tools. That's compiler correctness, not formal verification of programs. SPARK's GNATprove is qualified for proving program properties - fundamentally different.

> SPARK does not support the equivalent (shared mutable pointers) at all. Rust tools verify it with a heavy annotation burden. Which is better?

SPARK supports controlled aliasing through ownership aspects and borrow/observe annotations - but these are designed into the verification framework, not bolted on. The key difference: SPARK's aliasing controls are part of the formal semantics and verified by GNATprove automatically. Rust's unsafe shared mutability requires external verification tools with manual proof burden. SPARK deliberately restricts aliasing patterns to those that remain efficiently verifiable: it's not "can't do it", it's "only allow patterns we can verify".

> Has the Ada compiler been formally verified? No. So you're trusting the Ada type checker just the same.

Qualified compilers (GNAT Pro) undergo qualification per DO-178C/ED-12C. The difference: SPARK's semantics are formally defined independent of the compiler. Rust verification tools must trust rustc's implementation because Rust has no formal specification. When rustc changes behavior (happens frequently), verification tools break. SPARK's specification is stable.

> And if you have a script that generates some boilerplate code into your Ada project, is the script logic verified?

External build scripts are different from language features. Procedural macros are part of Rust's language definition and can access compiler internals. If you use external code generation with SPARK, you're explicitly stepping outside the language's guarantees - and safety standards require justification. Rust embeds this escape hatch in the language itself.

> I'm not sure what you mean by "SPARK guarantees this for the language; Rust tools must verify it per codebase"

SPARK: If your code compiles in the SPARK subset, overflow/division-by-zero/array bounds are automatically proven impossible by language rules. You can add contracts for functional correctness.

Rust tools: You must annotate code, write invariants, and run verification per program. The language provides memory safety via the type system, but not freedom from arithmetic errors or functional correctness. These must be proven per codebase with tools.

The distinction: language-level guarantees vs. per-program verification effort.
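
To make that distinction concrete, a sketch of the per-program side in the style of Prusti's `prusti_contracts` annotations (hypothetical function): each condition below is a hand-written obligation for this one codebase, whereas SPARK's absence-of-runtime-errors checks come with every conforming program.

    use prusti_contracts::*;

    // Both conditions are manual proof obligations, written and
    // maintained per function, per codebase.
    #[requires(divisor != 0)]
    #[ensures(result <= dividend)]
    fn div_floor(dividend: u32, divisor: u32) -> u32 {
        dividend / divisor
    }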

> It's clear at this point that neither of us will get convinced

Fair enough, but the fundamental difference is whether verification is a language property or a tool property. Both approaches have merit for different use cases.