This is the smoking gun for me:

    procedure Overflow_This_Buffer
       with SPARK_Mode => On
    is
       type Integer_Array is array (Positive range <>) of Integer;
       Int_Array : Integer_Array (1 .. 10) := [others => 1];
       Index_To_Clear : Integer;
    begin
       Ada.Text_IO.Put ("What array index should be cleared? ");
       --  Read the array index from stdin.
       Ada.Integer_Text_IO.Get (Index_To_Clear);

       Int_Array (Index_To_Clear) := 0;
    end Overflow_This_Buffer;

What I am asking is: what exactly is being formally proven here, from first principles? Because to me this looks no different from a simple assertion or runtime bounds check, which can be performed in any language with if statements and exceptions.

How did you prove that no one could just put in an arbitrary integer into that function? Does it fail to compile? Then you're just making a custom type which enforces valid state, which, again, can be done in any language with a rich type system and if statements.

If I might be more blunt: formal proofs in the field of programming appear to be nothing more than mathematicians trying to strong-arm their way into an industry where they aren't necessarily needed. Formal proofs make sense when you're trying to come up with new formulas or studying the cosmos, but they make no sense when you're trying to predict how bits will behave on a computer, because for the most part that is a problem already solved by the languages themselves.

> How did you prove that no one could just put in an arbitrary integer into that function? Does it fail to compile?

The answer quite literally follows immediately after the code sample you quoted:

> Attempting to prove the absence of runtime errors gives us the following warnings:

  buffer_overflow.adb:162:26: medium: unexpected exception might be raised
    162 |      Ada.Integer_Text_IO.Get (Index_To_Clear);
        |      ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~

  buffer_overflow.adb:164:18: medium: array index check might fail
    164 |      Int_Array (Index_To_Clear) := 0;
        |                 ^~~~~~~~~~~~~~
    reason for check: value must be a valid index into the array
    possible fix: postcondition of call at line 162 should mention Item 
    (for argument Index_To_Clear)
    162 |      Ada.Integer_Text_IO.Get (Index_To_Clear);
        |                         ^ here
> The SPARK prover correctly notices that there's nothing stopping us from entering a value outside the array bounds. It also points out that the Get call we're using to read the integer from stdin can raise an unexpected Constraint_Error at runtime if you type in anything that can't be parsed as an integer.

This is followed by a version of the program which SPARK accepts.

> Then you're just making a custom type which enforces valid state, which, again, can be done in any language with a rich type system and if statements.

The key word here is can. You can write correct software using any language, from raw binary to theorem-proving languages, but some languages check more properties at compile time than others. What using something like SPARK/Lean/Agda/Rocq buys you is the "more" part, up to allowing you to eliminate runtime checks entirely because you proved at compile time that the corresponding error conditions cannot happen (e.g., proving that indices are always in bounds allows you to omit bounds checks). That's not something the more popular languages are capable of, because their type systems are not expressive enough.
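To make that concrete, here's a minimal sketch of my own (not from the article; the names Bounds_Demo and Clear_Element are made up) showing how a SPARK precondition turns the bounds check into a compile-time obligation on callers:

    procedure Bounds_Demo
       with SPARK_Mode => On
    is
       type Integer_Array is array (Positive range <>) of Integer;
       Int_Array : Integer_Array (1 .. 10) := [others => 1];

       --  The precondition is the contract: every caller must be proven
       --  to pass an index inside the array's bounds, so the assignment
       --  in the body needs no runtime check.
       procedure Clear_Element (A : in out Integer_Array; I : Positive)
          with Pre => I in A'Range
       is
       begin
          A (I) := 0;
       end Clear_Element;
    begin
       Clear_Element (Int_Array, 3);   --  provable: 3 is within 1 .. 10
       --  Clear_Element (Int_Array, 42) would fail the proof.
    end Bounds_Demo;

If the index instead comes from outside (stdin, a file), the prover forces you to establish the precondition before the call, which is exactly what the loop in the article's fixed version does.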

> Formal proofs make sense when you're trying to come up with new formulas or studying the cosmos, but they make no sense when you're trying to predict how bits will behave on a computer, because for the most part that is a problem already solved by the languages themselves.

This seems contradictory, especially considering that type checking is itself constructing a formal proof?

> The SPARK prover correctly notices that there's nothing stopping us from entering a value outside the array bounds. It also points out that the Get call we're using to read the integer from stdin can raise an unexpected Constraint_Error at runtime if you type in anything that can't be parsed as an integer.

To me, this doesn't sound like something unique to SPARK. Let's return to the solution example:

    procedure Overflow_This_Buffer
       with SPARK_Mode => On
    is
       type Integer_Array is array (Positive range <>) of Integer;
       Int_Array : Integer_Array (1 .. 10) := [others => 1];
       Index_To_Clear : Integer := Int_Array'First - 1;
    begin
       while Index_To_Clear not in Int_Array'Range loop
          Ada.Text_IO.Put ("What array index should be cleared? ");
          --  Read the array index from stdin.
          Ada.Integer_Text_IO.Get (Index_To_Clear);
       end loop;

       Int_Array (Index_To_Clear) := 0;
    end Overflow_This_Buffer;

All you have done here is prove that, within the vacuum of this function, the index will be a valid one. Indeed, this is even confirmed by the article author just prior:

> If we wrap the Get call in a loop, and poll the user continuously until we have a value within the array bounds, SPARK can actually prove that a buffer overflow can't occur. (Remember to initialise the Index_To_Clear variable to something outside this range!)

You have to poll the user continuously until a valid input is given. Again, I don't see how this is any different from, or more special than, any other programming language handling the error by asking the user to retry.

> The key word here is can. You can write correct software using any language, from raw binary to theorem-proving languages, but some languages check more properties at compile time than others. What using something like SPARK/Lean/Agda/Rocq buys you is the "more" part, up to allowing you to eliminate runtime checks entirely because you proved at compile time that the corresponding error conditions cannot happen (e.g., proving that indices are always in bounds allows you to omit bounds checks). That's not something the more popular languages are capable of, because their type systems are not expressive enough.

But I think the discussion here is about Rust, Ada, and C/C++, no? The type systems in Rust, Ada, and C++ are all robust enough to accomplish at least this.

> This seems contradictory, especially considering that type checking is itself constructing a formal proof?

Exactly! This is the root of my issue: you can accomplish this magical "proof" you speak of without using SPARK or Ada or any special "contract" language.

> Again, I don't see how this is any different from, or more special than, any other programming language handling the error by asking the user to retry.

The blog's emphasis is that SPARK catches the possible error at compile time, so you can't forget or neglect to handle it. Notice that neither rustc nor clippy complains at compile time about the potential out-of-bounds access in the Rust program that precedes the SPARK demo, while SPARK catches the potential issue in the naive translation.

> But I think the discussion here is about Rust, Ada, and C/C++, no?

Sure, but my point is that type systems are not all equally capable. I don't think it's controversial at all that Rust's type system is capable of proving things at compile time that C++'s type system cannot, the most obvious examples being memory safety and data race safety. Likewise, SPARK (and C++, for that matter) is capable of things that Rust is not.

> The type systems in Rust, Ada, and C++ are all robust enough to accomplish at least this.

I'm not sure about that? You might be able to approach what SPARK can do, but at least off the top of my head you'll end up needing a check somewhere that the compiler can't verify.

And that's just for this instance; I don't think either Rust or C++ can match SPARK's more general capabilities. For example, I believe neither Rust nor C++ is able to express "this will not panic/throw" in its type system. There are tricks for the former that approximate the functionality (e.g., dtolnay/no_panic [0]), but those are not part of the type system and have limitations (e.g., they do not work with panic=abort). The latter has `noexcept`, but semantically that's basically a try { } catch (...) { std::terminate(); } wrapped around the corresponding function, and it certainly won't stop you from throwing or calling a throwing function anyway.

> Exactly! This is the root of my issue: you can accomplish this magical "proof" you speak of without using SPARK or Ada or any special "contract" language.

Perhaps I wasn't clear enough. I was trying to say that it's contradictory to complain about formal proofs while also claiming you can accomplish the same thing using other programming languages' type systems, because those type systems are formal proofs! It's like saying "You don't need formal proofs; just use this other kind of formal proof".

I don't think that assertion is correct either. The reason separate languages are used for this kind of proof is that they restrict operations that can't be proven, add constructs that provide the additional information needed for the proof, or both. Perhaps as an extreme example, consider trying to make compile-time proofs for a "traditional" dynamically-typed language: Smalltalk, pre-type-hints Python, Lua, etc.

[0]: https://github.com/dtolnay/no-panic

Maybe you'll find this example to be a bit more useful: https://blog.adacore.com/i-cant-believe-that-i-can-prove-tha...

The idea is that you define a number of pre- and post-condition predicates for a function that you want proved (in what's effectively the header file of your Ada program). As with tests, these checks showing that the output is correct are often shorter than the function body, as in this sorting example.

Then you implement your function body, and the prover attempts to verify that your post-conditions hold given the pre-conditions. Along the way it also tries to check other things like overflows, whether the pre- and post-conditions of the routines called inside are satisfied, etc. So you can use the prover to ensure at compile time that any properties you care about in your program are satisfied, properties you might otherwise only catch at run time via assertions.
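
As a rough sketch of what such a spec can look like (my own simplified example, not the one from the article, which also proves the output is a permutation of the input):

    package Sorting
       with SPARK_Mode => On
    is
       type Nat_Array is array (Positive range <>) of Natural;

       --  Post-condition predicate: each element is <= its successor.
       function Is_Sorted (A : Nat_Array) return Boolean is
          (for all I in A'First .. A'Last - 1 => A (I) <= A (I + 1));

       --  Whatever body you write for Sort, the prover has to show that
       --  this post-condition holds for every possible input array.
       procedure Sort (A : in out Nat_Array)
          with Post => Is_Sorted (A);
    end Sorting;

The prover (GNATprove) then tries to discharge that Post against the body you supply, and at every call site it checks the callee's Pre, much like the run-time assertions you'd otherwise rely on, but at compile time.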