This is misleading in that the (Curry–Howard) correspondence is between proofs and the static typing of programs. A bug in a proof therefore corresponds to a bug in the static typing of a program (or to the type system of the programming language being unsound), not to any other program bug.
(Also: complementary != complimentary.)
I don't think the author is referring to the C–H correspondence. Just the fact that
a) it can actually be helpful to check that some property holds up to one zillion, even though it's not a proof that it holds for all numbers; and
b) if a proof has a bug, a program checking the relevant property up to one zillion is not unlikely to produce a counterexample (quick sketch below).
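To make (b) concrete, here's a minimal Python sketch (the example claim is mine, not the author's): Euler's polynomial n^2 + n + 41 is prime for n = 0..39, so "it's prime for all n" looks plausible, but a dumb bounded check falsifies it immediately, no proof machinery required.

```python
def is_prime(k: int) -> bool:
    # Trial division; plenty fast for a bounded check like this.
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

# Buggy "proof" claim: n^2 + n + 41 is prime for all n >= 0.
# Checking up to a modest bound surfaces the counterexample at n = 40.
for n in range(100):
    if not is_prime(n * n + n + 41):
        print(f"counterexample: n = {n}, value = {n * n + n + 41}")  # 1681 = 41^2
        break
```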
> Also: complementary != complimentary
I'm gonna blame autocorrect for that one, but appreciate you catching it. Fixed! :)
The point is not to lean so tightly on the correspondence. The fact that you're coming at the problem differently (even that it's a different problem, "for some" versus "for all") is actually helpful. You're less likely to make the same mistake in both.
There's a technique for unit testing where you write the code in two languages and check the implementations against each other (sketch below). If you just used a compiler and leaned on the correspondence for confidence, that would miss the point. The point is to be in a different frame of mind, using different tools.
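A minimal sketch of that dual-implementation idea (done in one language for brevity, though the comment suggests two; mean_v1/mean_v2 are made-up names): two independently written versions of the same routine, cross-checked on random inputs. A shared misconception has to be made twice to slip through.

```python
import random

def mean_v1(xs: list[float]) -> float:
    # First implementation: sum, then divide.
    return sum(xs) / len(xs)

def mean_v2(xs: list[float]) -> float:
    # Independently written implementation: incremental (running) mean.
    m = 0.0
    for i, x in enumerate(xs, start=1):
        m += (x - m) / i
    return m

# Cross-check the two on random inputs; a bug in either one shows up
# as a disagreement, even though neither is "proved" correct.
for _ in range(1000):
    xs = [random.uniform(-1000.0, 1000.0) for _ in range(random.randint(1, 50))]
    assert abs(mean_v1(xs) - mean_v2(xs)) < 1e-9
print("implementations agree on 1000 random inputs")
```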
I think this is wrong: code is proofs, types are propositions.
Code is proof that the operation embodied by the code works. I don't understand how it proves anything more generally than that, apart from code using exotic languages or techniques intended for just that purpose.
Well, in theory (and I guess more generally philosophy) land, sure, you can't really "prove absoluteness" outside of your axioms and assumptions. You need a notion of true and false, then implications, for example, to do logic; then whatever leap it takes from there to do set theory; then you go up from there, etc. It's turtles all the way down.
In practice land (real theorem provers), I guess the idea is that the prover should theoretically be a perfect logic engine. Two issues:
1. What if there's a compiler bug?
2. How do I "know" that I actually compiled "what I meant" to this logic engine?
(Both of which are restatements of what I said in theory land.) You are given that, supposedly, you have a proof within your internal logic engine, and you want to translate it to a "universal" one.
I guess the idea is that, in practice, you just hope that slight perturbations to your mental model, the translation, or even the compiler itself "hard fail". You hope the space is very much not continuous, so that violating a boundary fails the self-consistency check.
(As opposed to, for example, physical engineering, which generally doesn't allow hard failure, designs in a bunch of controls and guards, and is very much a continuum.)
A trivial example is how easy it is to typo a constant or a variable name in a normal programming language and have the program still compile fine (this is why we have tests! - quick sketch below). The idea is that, from trivial errors like that all the way up to fundamental misconceptions, you can catch perturbations from the ideal, be they small or large. I think what makes one of these theorem provers minimally good is that you can't easily, accidentally encode a concept wrong (going from high-level model A to low-level theorem-proving model B), for a variety of reasons. Then of course runtime efficiency, ergonomics, etc. come later.
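To illustrate the typo'd-constant point (circle_area and the bad constant are made up for this sketch): the language accepts the program without complaint, and only a check against an independent source of truth notices.

```python
import math

def circle_area(r: float) -> float:
    PI = 3.1415826  # one digit mistyped (should be 3.1415926...); compiles fine
    return PI * r * r

# Nothing in the language objects to the typo; a test comparing against
# an independent reference (math.pi) flags it immediately.
error = abs(circle_area(1.0) - math.pi)
print(f"error vs math.pi: {error:.2e}")  # ~1.0e-05, well above any sane tolerance
```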
Of course, this raises the question of just how "powerful" these models are in practice - my friend is doing a research project with these, and something as simple as "proving a DFS solves a problem" is apparently horrible.
The types are the propositions proved by the proof; the program is the proof. The proof is correct <=> the program is well-typed.
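For concreteness, here's what that correspondence looks like in a real prover (a minimal Lean 4 sketch; modus_ponens is just my name for it): the type is the proposition, the term is the proof, and the proposition is proved exactly when the term type-checks.

```lean
-- The proposition is the type: P → (P → Q) → Q.
-- The proof is the program: plain function application.
theorem modus_ponens (P Q : Prop) : P → (P → Q) → Q :=
  fun hp hpq => hpq hp
```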