Is asserting the assumptions during code execution not standard practice for formally verified code?

By "asserting X" do you mean "checking whether X is true and crashing the program if not", like the assert macro in C or the assert statement in Python? No, that is almost never done, for three reasons:

• Crashing the program is often what you formally verified the program to prevent in the first place! A crashing program is what destroyed Ariane 5 on its maiden flight, for example. Crashing the program is often the worst possible outcome rather than an acceptable one.

• Many of the assumptions are not things that a program can check are true. Examples from the post include "nothing is concurrently modifying [variables]", "the compiler worked correctly, the hardware isn't faulty, and the OS doesn't mess with things," and, "unsafe [Rust] code does not have [a memory bug] either." None of these assumptions could be reliably verified by any conceivable test a program could make.

• Even when the program could check an assumption, it often isn't computationally feasible; for example, binary search of an array is only valid if the array is sorted, but checking that every time the binary search routine is invoked would take it from logarithmic time to linear time, typically a slowdown of orders of magnitude that would defeat the purpose of using a binary search instead of a simpler sequential search. (I think Hillel tried to use this example in the article but accidentally wrote "binary sort" instead, which isn't a thing.)

When crashing the program is acceptable and correctness preconditions can be efficiently checked, postconditions usually can be too. In those cases, it's common to use either runtime checks or property-based testing instead of formal verification, which is harder.
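For instance, a property-based test for binary search can be hand-rolled in a few lines. This is a sketch in Python; `bsearch` and the linear-scan oracle are my own illustration, and a real project would typically use a library like Hypothesis instead:

```python
import bisect
import random

def bsearch(a, x):
    # Standard binary search via bisect; returns an index or -1.
    i = bisect.bisect_left(a, x)
    return i if i < len(a) and a[i] == x else -1

def check_search_property(search_fn, trials=200):
    """Hand-rolled property-based test: on random sorted inputs,
    the search must agree with the obvious linear-scan oracle."""
    for _ in range(trials):
        a = sorted(random.randrange(50) for _ in range(random.randrange(20)))
        x = random.randrange(50)
        assert (search_fn(a, x) != -1) == (x in a)

check_search_property(bsearch)
```

The point is that the postcondition ("found iff present") is cheap to state and check on random inputs, even when proving it formally would be much harder.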

This becomes an interesting conversation, then. First of all, "asserting" could also mean "checking whether X is true and logging an error" instead of exiting the program.

- But if you aren't comfortable crashing the program if the assumptions are violated, then what is your formal verification worth? Not much, because the formal verification only holds if the assumptions hold, and you are indicating you don't believe they will hold.

- True, some are infeasible to check. In that case, you could check them weakly or indirectly. For example, check whether the first two elements of the input array are out of order. You could also check them infrequently. Better to partially check your assumptions than not to check at all.
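A sketch of such a weak, cheap check (my own illustration): spot-check a few random adjacent pairs instead of scanning the whole array.

```python
import random

def spot_check_sorted(a, samples=8):
    """Cheap partial check: compare a few random adjacent pairs.
    Catches gross sort-order violations without an O(n) scan."""
    for _ in range(min(samples, len(a) - 1)):
        i = random.randrange(len(a) - 1)
        if a[i] > a[i + 1]:
            return False
    return True
```

This can miss a single out-of-place element, but run on every call it catches badly broken inputs quickly, at constant cost.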

You don’t assume the assertions, the verification shows they always hold!

You assume the premises. The verification shows that the conclusions hold (assuming the premises do). Both premises and conclusions are, in some sense, "assertions", though not the C assert sense.

You didn't answer my question!

I mean to ask both: "checking whether X is true and crashing the program if not", like the assert macro, OR assert as in a weaker check that does not crash the program (such as generate a log event).

> When crashing the program is acceptable and correctness preconditions can be efficiently checked, postconditions usually can be too.

What's interesting to me is the combination of two claims: formal verification is used when crashes are not acceptable, and crashing when formal assumptions are violated is therefore not acceptable. This makes sense on the surface - but the program is only proven crashproof when the formal assumptions hold. That is all formal verification proves.

It's more that adding intentional conditional crashes to the program in situations where crashing is the worst possible outcome can't possibly make the situation better. It might not make it worse, if the crashes never happen.

As for log messages, yeah, people do commonly put log messages in their software for when things like internal consistency checks fail.

I believe this is a false dichotomy. It's interesting but it also demands no third way exist.

For example, I have run services which existed inside a forever() loop. They exit, and are forcibly restarted. Is that viable for a flight control system? No. Does it allow me to meet a lower-bound availability requirement in the face of OOM killers? Yes, until the size of a quiescent from-boot system causes OOM.

BGP speakers who compile in runtime caps on the prefix count can be entirely stable and run BGP "correctly" right up to the point somebody sends them more than the cap in prefixes. That can take hours to emerge. I lived in that world for a while too.

> "checking whether X is true and logging an error"

You now have some process or computation which is in an unknown state. Presumably it was important! There are lots of cases where runtime errors are fine and expected -- when processing input from the outside world, for instance -- but if you started a chain of computation with good input and somehow ended up with bad input, then you have a bug! That is bad! Everything after that point can no longer be trusted: the code went wrong somewhere earlier, invalidating everything between there and the place you caught the error. And that has possibly poisoned your whole system.

That’s impractical. Take binary search and the assumption that the list is sorted. Verifying the list is sorted would negate the point of binary search, as you would be inspecting every item in the list.

There is a useful middle ground here. When picking the middle element, verify that it is indeed within the established bounds. This way, you'll still catch the sort order violations that matter without making the whole search linear.
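A minimal sketch of that middle ground in Python (my own illustration, not from the thread): track the values already seen at the shrinking bounds, and assert that each probed element falls between them. That is an O(1) check per probe, so the search stays logarithmic.

```python
def checked_bsearch(a, target):
    """Binary search that spot-checks sort order at each probe."""
    lo, hi = 0, len(a) - 1
    low_val, high_val = float("-inf"), float("inf")
    while lo <= hi:
        mid = (lo + hi) // 2
        v = a[mid]
        # O(1) partial check: the probed value must lie between the
        # values already observed at the current bounds.
        assert low_val <= v <= high_val, "list is not sorted"
        if v == target:
            return mid
        if v < target:
            lo, low_val = mid + 1, v
        else:
            hi, high_val = mid - 1, v
    return -1
```

This won't catch every unsorted input (it only sees the elements it probes), but it catches exactly the violations that would have made this particular search give a wrong answer silently.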

ASSERTING the list is sorted as an assumption is significantly different from VERIFYING that the list is sorted before executing the search. Moreover, type systems can track that a list was previously sorted and has maintained its sorted status, making the assumption reasonable to state.

What do you mean when you say "assert" and "verify"? In my head, given the context of this thread and the comment you're replying to, they can both only mean "add an `if not sorted then abort()`."

But you make some sort of distinction here.

Verify means you check. Assert means you say it is, but might or might not check.

I know that Solaris (or at least, ZFS) has VERIFY and ASSERT macros where the ASSERT macros are compiled out in production builds. Is that the kind of thing you're referring to?

You can also mark certain codepaths as unreachable to hint to the compiler that it can make certain optimisations (e.g., "this argument is never negative"), but if you aren't validating that the assumption is correct I wouldn't call that an assertion -- though a plain reading of your comment would imply you would still call this an "assertion"? AFAIK, no language calls this construct "assert".

This is probably one of those "depends on where you first learned it" bits of nomenclature, but to me the distinction here is between debug assertions (compiled out in production code) and assertions (always run).

This thread started with:

> Is asserting the assumptions during code execution not standard practice for formally verified code?

Are you using the same definition of "assert" as that post does?

I'm not clear what definition of assert anyone is using. Thus I'm trying to create a new one that I think is useful (in the context of this type of discussion only!).

Verify means you checked.

Assert means you are suggesting something is true, but might or might not have checked. Sometimes an assert is "too hard" to verify but you have reason to think it is true. This could be because of low-level code, or because verifying would cost too much CPU at runtime, or exceed the limits of our ability to prove large systems. Sometimes an assert is like a Mafia boss: "It is true or I'll shoot." It might or might not really be true, but nobody is going to argue the point now. That can sometimes be needed to keep a discussion on a more important topic, despite the imagery.

assert in most languages takes a boolean and crashes the program if it is false.

If you want to assert that a list is sorted, you need some function that checks if it is sorted and returns a boolean.

In many (most?) languages assert is an optional crash if false. The language can choose to run the check or not. A function to check if a list is sorted and return a boolean is not hard to write - but of course you then need to prove that function is correct.
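For example, such a check is a one-liner in Python (a sketch; proving this function correct is left aside, as the comment notes):

```python
def is_sorted(a):
    """O(n) check: every adjacent pair is in ascending order."""
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))
```

So it would be used as `assert is_sorted(xs)` before the search, with the language (or build mode) deciding whether the check actually runs.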

> Moreover, type systems can track that a list was previously sorted and maintained it's sorted status making the assumption reasonable to state.

This is true, but if you care about correct execution, you would need to re-verify that the list is sorted - bitflips in your DRAM or a buggy piece of code trampling random memory could have de-sorted the list. Then your formally verified application misbehaves even though nothing is wrong with it.

It's also possible to end up with a "sorted" list that isn't actually sorted if your comparison function is buggy, though hopefully you formally verified the comparison function and it's correct.

You already have hardware-level bit-flip verification; you don't need to recheck the list.

That only works up to some level of bit flips, like all error-correcting codes. It works for one or maybe even two bit flips, but not more than that.

Only if you have ECC RAM, and even then it’s not perfect.

Only if you verify it for every search. If you haven't touched the list since the last search, the verification is still good. For some (not all) situations, you can verify the list at the start of the program, and never have to verify it again.
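A sketch of that verify-once pattern in Python (the class and its names are my own illustration; it is only safe if nothing mutates the underlying list afterwards):

```python
import bisect

class SortedList:
    """Verify sortedness once at construction, then trust it.
    Later searches skip the O(n) re-check on every call."""
    def __init__(self, items):
        assert all(items[i] <= items[i + 1]
                   for i in range(len(items) - 1)), "not sorted"
        self._items = items

    def contains(self, x):
        # Binary search, relying on the construction-time check.
        i = bisect.bisect_left(self._items, x)
        return i < len(self._items) and self._items[i] == x
```

This is the dynamic-checking analogue of the type-system idea upthread: the only way to get a `SortedList` is through the constructor that verified it.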

What would that look like if you accidentally assumed you have arbitrarily large integers but in practice you have 64 bits?

    Add(x, y):
        Assert(x >= 0 && y >= 0)
        z = x + y
        Assert(z >= x && z >= y)
        return z

There are definitely smarter ways to do this, but in practice there is always some way to encode the properties you care about such that your assertions will be violated. If you can’t observe a violation, it’s not a violation: https://en.wikipedia.org/wiki/Identity_of_indiscernibles
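One way to make that concrete in Python (a sketch: Python ints are arbitrary precision, so the 64-bit wraparound is simulated with a mask, and the function name is my own):

```python
MASK = (1 << 64) - 1  # simulate unsigned 64-bit machine arithmetic

def add_u64(x, y):
    assert x >= 0 and y >= 0
    z = (x + y) & MASK  # wraps on overflow, like the hardware would
    # The postcondition from the example above catches the wrap:
    # if z wrapped around, it is smaller than at least one operand.
    assert z >= x and z >= y, "overflow detected"
    return z

add_u64(1, 2)            # fine
# add_u64(2**63, 2**63)  # wraps to 0 and trips the assertion
```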

In some languages overflow is treated as can't-happen, and so the optimizer will remove your checks.

Best I can tell, overflow is undefined behavior for signed ints in C/C++, so -O3 with gcc might remove a check that could only be true if UB occurred.

The compound predicate in my example above coupled with the fact that the compiler doesn’t reason about the precondition in the prior assert (y is non-negative) means this specific example wouldn’t be optimized away, but bluGill does have a point.

An example of an assert that might be optimized away:

    int addFive(int x) {
        int y = x + 5;
        assert(y >= x);
        return y;
    }

Yes, you cannot meaningfully assert anything after UB in C/C++. But you can let the compiler add a trap for overflow (-fsanitize=signed-integer-overflow -fsanitize-trap=all), or you can write your assertion in a way that does not rely on the overflowed result (e.g. ckd_add), or you can use "volatile" to write it in a way where the compiler is not allowed to assume anything.

Clang is a bit smarter than GCC here (for some definition of 'smart') and does optimize the original version:

https://gcc.godbolt.org/z/3Y4aheG6x

Care to share a language where the compiler infers the semantic meaning of asserts and optimizes them away? I’ve never heard of this optimization.

C. This is a great thread: https://mastodon.social/@regehr/113821964763012870

(That was one of my texts at uni)

Signed overflow is UB in C/C++ and several compilers will skip explicit overflow checks as a result. See: https://godbolt.org/z/WehcWj3G5

C and C++