This becomes an interesting conversation then. First of all, it could mean "checking whether X is true and logging an error" instead of exiting the program.
- But if you aren't comfortable crashing the program when the assumptions are violated, then what is your formal verification worth? Not much, because the formal verification only holds if the assumptions hold, and you are indicating you don't believe they will.
- True, some are infeasible to check. In that case, you could check them weakly or indirectly: if the assumption is that the input array is sorted, for example, check only that the first two elements are in order. You could also check them infrequently, say on a small fraction of calls. Better to partially check your assumptions than not to check at all. A rough sketch of both ideas is below.
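To make that concrete, here is a minimal C sketch of both ideas, assuming the precondition being guarded is "the input array is sorted" (the function names and the 1-in-1024 sampling rate are made up for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Weak/indirect check: O(1) instead of O(n). It can't confirm the
 * array is sorted, but it catches grossly bad input almost for free. */
static void check_sorted_weakly(const int *a, size_t n)
{
    if (n >= 2)
        assert(a[0] <= a[1]);  /* necessary, not sufficient, for sortedness */
}

/* Infrequent check: the full O(n) scan, but only on 1 call in 1024. */
static void check_sorted_sampled(const int *a, size_t n)
{
    static unsigned calls;
    if ((calls++ & 1023) != 0)
        return;
    for (size_t i = 1; i < n; i++)
        assert(a[i - 1] <= a[i]);
}
```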
You don't assume the assertions; the verification shows they always hold!
You assume the premises. The verification shows that the conclusions hold (assuming the premises do). Both premises and conclusions are, in some sense, "assertions", though not in the C assert sense.
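As a concrete (if simplified) illustration, here is what that split looks like in ACSL-style annotations of the kind tools such as Frama-C consume: the `requires` clauses are premises you assume, and the `ensures` clause is the conclusion the tool proves conditional on them. (In practice the loop would also need invariant annotations for the proof to go through; this is only a sketch of premise vs. conclusion.)

```c
#include <stddef.h>

/* requires = premises: assumed by the proof, never established by it.
 * ensures  = conclusion: proved by the tool, *given* the premises. */
/*@ requires n > 0;
  @ requires \valid_read(a + (0 .. n-1));
  @ ensures \forall integer i; 0 <= i < n ==> \result >= a[i];
  @*/
int max_of(const int *a, size_t n)
{
    int m = a[0];
    for (size_t i = 1; i < n; i++)
        if (a[i] > m)
            m = a[i];
    return m;
}
```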
You didn't answer my question!
I mean to ask about both: "checking whether X is true and crashing the program if not", like the assert macro, OR assert as a weaker check that does not crash the program (such as generating a log event).
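In code, the two flavors look something like this (hypothetical macro names, just to make the contrast concrete):

```c
#include <assert.h>
#include <stdio.h>

/* Hard flavor: check X and abort the program if it is false. */
#define HARD_ASSERT(x) assert(x)

/* Soft flavor: check X, log a diagnostic on failure, keep running. */
#define SOFT_ASSERT(x)                                              \
    do {                                                            \
        if (!(x))                                                   \
            fprintf(stderr, "soft assert failed: %s (%s:%d)\n",     \
                    #x, __FILE__, __LINE__);                        \
    } while (0)
```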
When crashing the program is acceptable and correctness preconditions can be efficiently checked, postconditions usually can be too.
What's interesting to me is the combination of two claims: formal verification is used when crashes are not acceptable, and crashing when formal assumptions are violated is therefore not acceptable. This makes sense on the surface - but the program is only proven crashproof when the formal assumptions hold. That is all formal verification proves.
It's more that adding intentional conditional crashes to the program in situations where crashing is the worst possible outcome can't possibly make the situation better. It might not make it worse, if the crashes never happen.
As for log messages, yeah, people do commonly put log messages in their software for when things like internal consistency checks fail.
I believe this is a false dichotomy. It's interesting, but it also demands that no third way exist.
For example, I have run services that lived inside a forever() loop: they exit and are forcibly restarted. Is that viable for a flight control system? No. Does it let me meet a lower-bound availability requirement in the presence of OOM killers? Yes, right up until the memory footprint of a quiescent, freshly booted system is itself enough to trigger OOM.
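The supervision side of that is nothing fancy, roughly this (a minimal POSIX sketch; the ./service path and the fixed one-second backoff are made up):

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Crash-and-restart supervisor: run the service forever, and whenever
 * it exits (clean exit, crash, OOM kill), log it and start it again. */
int main(void)
{
    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {
            execl("./service", "service", (char *)NULL);
            _exit(127);                 /* exec failed */
        }
        int status;
        waitpid(pid, &status, 0);
        fprintf(stderr, "service exited (status %d), restarting\n", status);
        sleep(1);                       /* crude backoff between restarts */
    }
}
```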
BGP speakers that compile in a runtime cap on the prefix count can be entirely stable and run BGP "correctly" right up to the point somebody sends them more prefixes than the cap allows. That can take hours to emerge. I lived in that world for a while too.
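The failure shape is roughly this (hypothetical names and cap value; the point is only that the limit is baked in at build time and the input that exceeds it may arrive hours later):

```c
#include <stdio.h>
#include <stdlib.h>

#define MAX_PREFIXES 500000   /* compiled-in cap, invisible until exceeded */

static size_t prefix_count;

static void install_prefix(void)
{
    if (prefix_count >= MAX_PREFIXES) {
        /* Entirely "correct" behavior until this branch is first taken. */
        fprintf(stderr, "prefix table full, tearing down session\n");
        exit(EXIT_FAILURE);
    }
    prefix_count++;
    /* ... install the route ... */
}
```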
> "checking whether X is true and logging an error"
You now have some process or computation which is in an unknown state. Presumably it was important! There are lots of cases where runtime errors are fine and expected -- when processing input from the outside world, for example -- but if you started a chain of computation with good input and somehow ended up with bad input, then you have a bug! That is bad! Everything after that point can no longer be trusted: the code went wrong somewhere before that, invalidating everything between there and where you caught the error, and it has possibly poisoned your whole system.