Code is proof that the operation embodied by the code works. I don't understand how it proves anything more generally than that, apart from code using exotic languages or techniques intended for just that purpose.
Well, in theory land (and I guess more generally philosophy land), sure, you can't really "prove" anything absolutely outside of your axioms and assumptions. You need a notion of true and false, then implication, to do logic; then whatever leap it takes from there to do set theory; then you build up from there, and so on. It's turtles all the way down.
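To make the turtles concrete, here's a minimal Lean 4 sketch (my illustration, names are mine): even a fully checked proof is only valid relative to the axioms it bottoms out in, and the prover will tell you which ones.

    -- Classical reasoning smuggles in axioms; #print axioms exposes them.
    theorem excludedMiddle (p : Prop) : p ∨ ¬p := Classical.em p

    #print axioms excludedMiddle
    -- prints something like:
    -- 'excludedMiddle' depends on axioms: [propext, Classical.choice, Quot.sound]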
In practice land (real theorem provers), I guess the idea is that the prover should, in theory, be a perfect logic engine. Two issues:
1. What if there's a compiler bug?
2. How do I "know" that I actually compiled "what I meant" to this logic engine?
(These are re-statements of what I said in theory land.) You're given that, supposedly, you have a proof within your internal logic engine, and you want to translate it into a "universal" one.
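For issue 2, a minimal Lean 4 sketch (again mine, just for illustration): the kernel certifies the proof of whatever statement you actually typed, not the statement you meant.

    -- What I *meant* to state: n < n + 1.
    -- What I actually typed: n ≤ n + 1, a strictly weaker claim.
    theorem offByOne (n : Nat) : n ≤ n + 1 := Nat.le_succ n
    -- Checks fine; the logic engine can't know the statement is wrong.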
I guess the idea is that, in practice, you just hope that slight perturbations to your mental model, to the translation, or even to the compiler itself "hard fail". You hope the space is very much not continuous, so that crossing any boundary fails the self-consistency check.
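When it works, the hard fail looks like this (another Lean 4 sketch of my own):

    theorem zeroAdd (n : Nat) : 0 + n = n := Nat.zero_add n

    -- Perturb the statement by one character and the same proof is rejected:
    -- theorem oneAdd (n : Nat) : 1 + n = n := Nat.zero_add n
    --   error: type mismatch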
(As opposed to, for example, physical engineering, which generally doesn't allow hard failure, builds in a bunch of controls and guards, and is very much a continuum.)
A trivial example is how easy it is to typo a constant or a variable name in a normal programming language and still have the program compile fine (this is why we have tests!). The idea is that, from trivial errors like that all the way up to fundamental misconceptions, you can catch perturbations from the ideal, be they small or large. I think what makes one of these theorem provers minimally good is that you can't easily, accidentally encode a concept wrong (going from high-level model A to low-level theorem-proving model B), for a variety of reasons. Runtime efficiency, ergonomics, etc. come later.
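Something like this, say in Python (a made-up example, names and all):

    SECONDS_PER_HOUR = 3600
    SECONDS_PER_DAY = 86400

    def uptime_days(seconds: int) -> float:
        # Bug: should be SECONDS_PER_DAY. The name exists, so this runs fine.
        return seconds / SECONDS_PER_HOUR

    # The language itself never objects; only a test catches the perturbation:
    assert uptime_days(86400) == 1.0, "caught only because we wrote a test"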
Of course, this raises the question of just how "powerful" these encodings are in practice - a friend of mine is doing a research project with these, and something as simple as proving that a DFS solves a given problem is apparently horrible.
The types are the propositions; the program is the proof. The proof is correct <=> the program is well-typed.
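That's the Curry-Howard correspondence. A one-line Lean 4 illustration (my sketch, not the parent's):

    -- The type is the proposition, the term is the proof;
    -- modus ponens is just function application.
    def modusPonens {p q : Prop} (h : p → q) (hp : p) : q := h hp

    -- A wrong "proof" simply fails to type-check:
    -- def bad {p q : Prop} (hp : p) : q := hp  -- type mismatch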