> It matters fundamentally. "Works well" for research projects or limited AWS components is not equivalent to DO-178C Level A certification where mathematical proof is required

As said in a sibling comment, certification for Rust is starting to appear. Rust is young and its usage in regulated industries is just barely beginning. Ada and SPARK are old and mature. It's not a fair comparison - but that doesn't mean Rust couldn't get there.

> > Few of the tools can verify unsafe code is free of UB
>
> With heavy annotation burden, for specific patterns

> > some of the tools do support interior mutability (with extra annotations I believe)
>
> Exactly - manual annotation burden.

SPARK does not support the equivalent (shared mutable pointers) at all. Rust verifiers do, with a heavy annotation burden. Which is better?

> Trait resolution determines which code executes (monomorphization). Lifetimes affect drop order and program semantics. These aren't erased - they're compiled into the code being verified. SPARK's type system is verifiable; Rust's requires the verifier to trust the compiler's type checker.

Has the Ada compiler been formally verified? No. So you're trusting the Ada type checker just the same.

The Ada specification was, if I understand correctly, formally defined. But there are efforts to do that to Rust as well (MiniRust, a-mir-formality, and in the past RustBelt).

> The macro logic is unverified Rust code executing at compile time. A bug in the macro generates incorrect code that may pass verification. SPARK has no equivalent escape hatch.

And if you have a script that generates some boilerplate code for your Ada project, is the script's logic verified? The generated code is, and that's what's important. Even with full formal verification, proving that the program fulfills its goals, you don't need to verify helpers like this - only the code they generate. If the output works, then even if the script is buggy, who cares?

> So they're doing what SPARK does automatically - proving absence of runtime errors

Exactly - that's the point: to prove freedom from runtime errors.

I'm not sure what you mean by "SPARK guarantees this for the language; Rust tools must verify it per codebase" - does SPARK not need to verify separate codebases separately? Does it somehow work magically for all of your codebases at once?

It's clear at this point that neither of us will get convinced, and I think I said everything I had about this already.

Have fun developing with Ada!

rustc and Rust have some rather serious type system holes and other kinds of issues.

https://github.com/Speykious/cve-rs

https://github.com/lcnr/solver-woes/issues

That first one is an implementation bug that's never been encountered in the wild.

Regardless, all projects have bugs. It's not really germane to qualification, other than that qualification assumes that software has bugs and that you need to, well, qualify them and their impact.

> Certification for Rust is starting to appear. Rust is young and its usage in regulated industries is just barely beginning.

Ferrocene has ISO 26262 qualification for the compiler, not verification tools. That's compiler correctness, not formal verification of programs. SPARK's GNATprove is qualified for proving program properties - fundamentally different.

> SPARK does not support the equivalent (shared mutable pointers) at all. Rust verifiers do, with a heavy annotation burden. Which is better?

SPARK supports controlled aliasing through ownership aspects and borrower/observer annotations - but these are designed into the verification framework, not bolted on. The key difference: SPARK's aliasing controls are part of the formal semantics and verified by GNATprove automatically. Rust's unsafe shared mutability requires external verification tools with a manual proof burden. SPARK deliberately restricts aliasing patterns to those that remain efficiently verifiable: it's not "can't do it", it's "only allow patterns we can verify".

> Has the Ada compiler been formally verified? No. So you're trusting the Ada type checker just the same.

Qualified compilers (GNAT Pro) undergo qualification per DO-178C/ED-12C. The difference: SPARK's semantics are formally defined independent of the compiler. Rust verification tools must trust rustc's implementation because Rust has no formal specification. When rustc changes behavior (which happens frequently), verification tools break. SPARK's specification is stable.

> And if you have a script that generates some boilerplate code into your Ada project, is the script logic verified?

External build scripts are different from language features. Procedural macros are part of Rust's language definition and can access compiler internals. If you use external code generation with SPARK, you're explicitly stepping outside the language's guarantees - and safety standards require justification. Rust embeds this escape hatch in the language itself.

> I'm not sure what you mean by "SPARK guarantees this for the language; Rust tools must verify it per codebase"

SPARK: If your code compiles in the SPARK subset, overflow/division-by-zero/array bounds are automatically proven impossible by language rules. You can add contracts for functional correctness.

Rust tools: You must annotate code, write invariants, and run verification per program. The language provides memory safety via the type system, but not freedom from arithmetic errors or functional correctness. These must be proven per codebase with tools.

The distinction: language-level guarantees vs. per-program verification effort.

> It's clear at this point that neither of us will get convinced

Fair enough, but the fundamental difference is whether verification is a language property or a tool property. Both approaches have merit for different use cases.