This was by far the most interesting part to me. I've never considered that code and proofs can be so complementary. It would be great if someone did this for all math proofs!
"Why include a script rather than a proof? One reason is that the proof is straightforward but tedious and the script is compact.
A more general reason that I give computational demonstrations of theorems is that programs are complementary to proofs. Programs and proofs are both subject to bugs, but they’re not likely to have the same bugs. And because programs make details explicit by necessity, a program might fill in gaps that aren’t sufficiently spelled out in a proof."
As a kid, I was marginally decent at competitive math. Not good like the kids who dominate those types of competitions at a high level, but good enough to qualify for the state competition.
What I was actually good at, or at least fast at, was TI-BASIC, which was allowed in a lot of cases (though not all). Usually the problems were set up so you couldn’t find the solution using just the calculator, but if you had a couple of ideas and needed to choose between them, you could sometimes cross off the wrong ones with a program.
The script the author gives isn’t a proof itself, unless the proposition is false, in which case a counterexample always makes a great proof :p
I used to do the same thing. I'd scan the test for problems amenable to computational approaches and either pull up one of my custom-made programs or write one on the spot and let it churn in the background for a bit while I worked on other stuff without the calculator.
This is misleading in that the (Curry–Howard) correspondence is between proofs and the static typing of programs. A bug in a proof therefore corresponds to a bug in the static typing of a program (or to the type system of the programming language being unsound), not to any other program bug.
(Also: complementary != complimentary.)
I don't think the author is referring to the C–H correspondence. Just the fact that
a) it can be actually helpful to check that some property holds up to one zillion, even though it's not a proof that it holds for all numbers; and
b) if a proof has a bug, a program checking the relevant property up to one zillion is not unlikely to produce a counterexample.
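To make b) concrete with an example of my own (not from the article): a brute-force check won't prove anything "for all", but when the claim is false it often coughs up the counterexample quickly. A minimal Python sketch, using Euler's famous near-prime polynomial n² + n + 41 as a stand-in for the "relevant property":

```python
def is_prime(m: int) -> bool:
    """Trial-division primality check; fine for small m."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# "n^2 + n + 41 is prime" holds for n = 0..39, then breaks.
# Checking "up to one zillion" (well, 1000) finds the bug immediately.
counterexamples = [n for n in range(1000) if not is_prime(n * n + n + 41)]
print(counterexamples[0])  # -> 40, since 40^2 + 40 + 41 = 1681 = 41^2
```

The script isn't a proof of anything, but if your pencil-and-paper proof of the "theorem" had a bug, this is exactly the kind of check that would expose it.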
> Also: complementary != complimentary
I'm gonna blame autocorrect for that one, but appreciate you catching it. Fixed! :)
The point is not to lean so tightly on the correspondence. The fact that you’re coming at the problem differently (even that it’s a different problem, “for some” versus “for all”) is actually helpful. You’re less likely to make the same mistake in both.
There’s a technique for unit testing where you write the code in two languages. If you just used a compiler to translate one into the other and trusted the correspondence, that would miss the point. The point is to be of a different mind and to use different tools.
I think this is wrong: code is proofs, types are propositions.
Code is proof that the operation embodied by the code works. I don't understand how it proves anything more generally than that, apart from code using exotic languages or techniques intended for just that purpose.
Well, in theory (and I guess more generally philosophy) land, sure, you can't really "prove absoluteness" outside of your axioms and assumptions. You need a notion of true and false, and then implications, for example, to do logic; then whatever leap it takes from there to do set theory; then go up from there, etc. It's turtles all the way down.
In practice land (real theorem provers), I guess the idea is that it should, in theory, be a perfect logic engine. Two issues:
1. What if there's a compiler bug?
2. How do I "know" that I actually compiled "what I meant" to this logic engine?
(Both of which are restatements of what I said in theory land.) You're given that, supposedly, you have a proof within your internal logic engine, and you want to translate it to a "universal" one.
I guess the idea is that, in practice, you just hope that slight perturbations to your mental model, the translation, or even the compiler itself "hard fail". You hope the space is very discontinuous, and that violating its boundaries fails the self-consistency check.
(As opposed to, for example, physical engineering, which generally doesn't allow hard failure, has a bunch of controls and guards built in, and is very much a continuum.)
A trivial example is how easy it is to typo a constant or a variable name in a normal programming language and have the program still compile fine (this is why we have tests!). The idea is that, from trivial errors like that all the way up to fundamental misconceptions, you can catch perturbations to the ideal, be they small or large. I think what makes one of these theorem provers minimally good is that you can't easily, accidentally encode a concept wrong (from high-level model A to low-level theorem-proving model B), for a variety of reasons. Then of course, runtime efficiency, ergonomics, etc. come later.
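A contrived sketch of my own of that trivial case, in Python: a transposed digit in a constant is perfectly valid code, and only a test against a known value notices the perturbation.

```python
# Typo: the real value ends in ...458, but the file still "compiles"
# (imports, parses, type-checks) without complaint.
SPEED_OF_LIGHT_M_S = 299_792_485

def light_travel_time(distance_m: float) -> float:
    """Seconds for light to cover distance_m in vacuum."""
    return distance_m / SPEED_OF_LIGHT_M_S

# The test: light should take exactly 1 s to travel 299,792,458 m.
typo_caught = abs(light_travel_time(299_792_458) - 1.0) > 1e-12
print(typo_caught)  # -> True: the language never objected, the test did
```

A theorem prover aims to shrink this gap: ideally the typo'd version simply wouldn't encode the concept you meant, and the check would fail at "compile" time rather than in a test.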
Of course, this raises the question of just how much "power" certain models give you. My friend is doing a research project with these, and something as simple as proving that a DFS solves a problem is apparently horrible.
The types are the propositions proved by the proof. The proof is correct <=> the program is soundly typed.
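As a concrete instance of propositions-as-types (a minimal sketch of my own in Lean 4, not from the article): the type is the proposition, and any program inhabiting that type is a proof of it.

```lean
-- The proposition "A and B implies A" is the *type* of this theorem;
-- the function body (the program) is the proof inhabiting it.
theorem and_left (A B : Prop) : A ∧ B → A :=
  fun h => h.1
```

If the body had a bug (say, returning the wrong component against a different goal), it simply wouldn't type-check, which is the sense in which "proof correct ⇔ program soundly typed".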
I came here to quote that entire section as well; I'm glad I checked the comments first.
I’ve never seen a more succinct explanation of the value of coding up scripts to demonstrate proofs.
I think I’ll tighten it up to “proofs have bugs” in the future.