I'll probably get a lot of hate mail for this but here goes nothing... Despite what many people like to claim, you cannot prove that a program has no bugs. That means proving the absence of bugs, and you cannot prove a negative. The best thing you can do is fail to find a bug, but that doesn't mean it isn't there.
Before everyone starts blabbing about formal verification, etc., consider this: how do you know that you didn't make a mistake in your formal verification? IOW, how do you know your formal verification is bug-free? Answer: you don't. Or if you try to formally verify your formal verification then you're just translating the problem to a new layer. It's just a chain of proofs that is always ultimately based on an unproven one, which invalidates the whole chain.
You can get asymptotically close to zero-bug proof, but you can never get there 100% of the way.
Formal methods practitioner here.
> That means proving the absence of bugs, and you cannot prove a negative. The best thing you can do is fail to find a bug, but that doesn't mean it isn't there.
You can conclusively (up to my next point) prove that a specific bug or class of bugs isn't there. But "entirely free of (all) bugs" is indeed a big misconception about what formal methods do.
> how do you know your formal verification is bug-free? Answer: you don't. Or if you try to formally verify your formal verification then you're just translating the problem to a new layer. It's just a chain of proofs that is always ultimately based on an unproven one, which invalidates the whole chain.
It's another misconception of formal methods to say that any result is established conclusively, without any caveats whatsoever. But then again neither is mathematics, or any other intellectual discipline. What formal methods does is reduce the surface area where mistakes could reasonably be expected to reside. Trusting the Rocq kernel, or a highly scrutinized model of computation and language semantics, is much easier than trusting the totality of random unannotated code residing in the foggiest depths of your average C compiler, for instance.
What is up with people saying you cannot prove a negative? Of course you can! (At least in formal settings)
For example, it's extremely easy to prove there is no square with diagonals of different lengths. On the hard end, Andrew Wiles proved Fermat's Last Theorem, which expresses a negative.
That's just a nit though, you're right about the infinite regress problem.
I believe it is rooted in legal proceedings, where you are usually not required to prove a negative because in practice it can be hard or even impossible.
E.g. it is extremely hard to prove you "weren't there" (e.g. at a crime scene) unless you can easily prove you were somewhere else (an affirmative): we do not keep court-admissible records of our whereabouts in case we get suspected of being in a place we were not.
So it does hold in a number of cases where keeping evidence is required for proof. In software, that evidence would be formal specs and test reports, which prove that the covered cases do work as specced but provide no proof outside those "specs" (loosely counting an automated test as a spec too).
It's pretty easy to prove lots of negatives outside of mathematics, too. It's easy to prove there's no elephant in my living room at the moment.
This guy named Ludwig Wittgenstein would like to have a word with you.
Also, what even is "a negative"? The following statements are equivalent:
"There are no squares with diagonals of different lengths"
"All squares have diagonals of equal lengths"
Similarly, I can rephrase the statement about the absence of bugs. These are equivalent:
"This program has no bugs"
"This program always does exactly what it is supposed to do"
If you think you can't prove the first statement, then go ahead and prove the second one.
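For what it's worth, the two phrasings really are logically interchangeable. A sketch in Lean (the theorem name is mine; the forward direction needs classical reasoning):

```lean
-- "There is no x failing P" ↔ "every x satisfies P".
theorem no_counterexample_iff_all {α : Type} (P : α → Prop) :
    (¬ ∃ x, ¬ P x) ↔ ∀ x, P x :=
  ⟨fun h x => Classical.byContradiction (fun hx => h ⟨x, hx⟩),
   fun h ⟨x, hx⟩ => hx (h x)⟩
```

So whether a statement "is a negative" depends entirely on how you choose to write it down.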
Are people thinking of falsification when talking about "proving negatives"? I.e. you can only falsify statements about the physical world, never prove them.
> That means proving the absence of bugs, and you cannot prove a negative.
You can prove that the program implements a specification correctly. That doesn't require proving a negative, but it does prove the absence of bugs. (I think you know this argument is weak, since your next paragraph is a complete non-sequitur)
> Or if you try to formally verify your formal verification then you're just translating the problem to a new layer. It's just a chain of proofs that is always ultimately based on an unproven one, which invalidates the whole chain.
All proofs ultimately rest on axioms. That's normal and not really an argument unless you want to doubt everything, in which case what's the point in ever saying anything?
This comment conflates a number of ideas that obscures its own epistemological point. For example, yes, you can prove a negative (for instance, you can prove there is no largest integer).
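To make that example concrete: for the naturals, the "no largest" claim is a two-line proved negative in Lean (a sketch using only a core lemma; the theorem name is mine):

```lean
-- A proved negative: no natural number is an upper bound of all naturals.
theorem no_largest_nat : ¬ ∃ n : Nat, ∀ m : Nat, m ≤ n :=
  fun ⟨n, h⟩ => Nat.not_succ_le_self n (h (n + 1))
```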
The question of who validates the validator is real, but the abstraction of "formal verification" does serve a purpose, like any good mathematical or programming abstraction. Whole classes of bugs are removed; what's left to verify is usually much shorter.
The better way to look at this is not to prove that you have no bugs, but to prove that a program conforms to a specification that Lean can verify.
Even for the addition of two numbers, the execution of a program can be wrong if the CPU has a fault or if the runtime has a bug.
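As a toy illustration of "conforms to a specification" (just a sketch; `myMax` and the theorem name are mine), Lean can verify the spec for all inputs, while saying nothing about faulty hardware running the compiled code:

```lean
-- A tiny "program".
def myMax (a b : Nat) : Nat := if a ≤ b then b else a

-- Its spec: the result is an upper bound of both inputs.
theorem myMax_ub (a b : Nat) : a ≤ myMax a b ∧ b ≤ myMax a b := by
  unfold myMax
  split <;> omega
```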
I think you just need to look at why formal verification exists.
This feels like a thought exercise rather than an argument.
By your logic, it's impossible to prove that a car is driving at 60 mph. There could be an error in the speedometer which makes it impossible to verify that the car is going at that speed. You can get asymptotically close to being sure that you're driving at 60 mph, but you can never be 100% sure.
This is useless and serves no purpose.
That's not correct. You can prove a car is driving at 60 mph as soon as you measure it doing that. "Proving a negative" is for statements like "there are no purple zebras". You can never prove this, because there is always the possibility that a purple zebra is somewhere you haven't looked yet. As soon as you find one the statement becomes falsified, but until then it remains unresolved, even if almost certainly true.
Linking back to the parent statement, it's hard to prove a program has no bugs when there is always the possibility the bug just hasn't been found yet. On the flip side it's easy to prove something does have bugs as soon as you find one.
You can prove there are no purple zebras on earth by actually surveying the population of zebras, which is finite.
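Right: over a finite, enumerated population the negative becomes a decidable check. A Lean sketch (the `Color` and `herd` names are mine):

```lean
inductive Color where
  | black
  | white
  | purple
deriving DecidableEq

-- The (finite) surveyed population.
def herd : List Color := [.black, .white, .black]

-- A proved negative, established by exhaustively checking every member.
example : Color.purple ∉ herd := by decide
```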
How do you know the one purple zebra wasn't just walking around in a way that meant they were always not where you were looking?
You can probabilistically say "it's extremely unlikely purple zebras exist", but you can never prove 100% that they don't exist. And back to the real example: how can you prove there isn't a bug you just haven't found yet?
Even if all your claims are true (which I'm not 100% sold on but bear with me)... who cares? Sure, it won't get literally 100% of the way, but if we can get to a place where the only bugs in software are in the spec, in the proof assistants, and in weird hardware bugs, that would be an enormous win!
I disagree but the constraints required to get there are probably untenable for most practical applications, so in practice I agree.