Ok, I read this post quite a while ago and something about the reasoning bothered me then, and it still bothers me now.
In short, my read is that the argument does not rule out that there is a computable function that grows faster than BB(N), but rather it shows that it is impossible to prove or “decide” whether a given computable function grows faster than BB(N).
Maybe this is equivalent to the conclusion stated? Am I missing something obvious? (That seems likely; Scott Aaronson is much better at this than me.)
Edited for clarity
It is definitely way too easy to accidentally confuse "x is true" and "x is proven to be true" on topics of undecidability, and this is one of those times where the answer is in fact obvious, but it takes you 15 minutes to realize that it's obvious.
The halting problem is uncomputable (by diagonalization); that is, no computable function can solve the halting problem, which is stronger than the statement that we merely cannot prove that a given function solves it. As a result, if somebody presents a function that purports to solve the halting problem, then we can definitively say that either it isn't computable, or the thing it solves isn't the halting problem.
For the Busy Beaver function, it's obvious that the resulting program (if it existed) would solve the halting problem, so clearly it's not a computable function, and analyzing what part of it isn't computable leads you to the Busy Beaver as the only option.
For the D(N) function... well, since we assume D(N) ≥ BB(N), D(N) is still an upper bound on the number of steps a halting TM could run, so the resulting program using D(N) in lieu of BB(N) would still solve the halting problem, which forces us to conclude that it's not computable.
A different argument that may make more sense is this:
Consider the program "if machine M has not halted after f(sizeof M) steps, print not halt, else print halt." If f is a computable function, then the program is clearly computable. But since no computable program can solve the halting problem, we know that this program cannot either. Therefore, for every computable function f, there must exist some machine M such that M halts only after more than f(sizeof M) steps. In other words, f cannot be an upper bound on the Busy Beaver function.
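The program described above can be sketched in Python. This is my own illustration of the shape of the argument, with invented names; a zero-argument generator stands in for a Turing machine, and one `next()` call stands in for one step:

```python
def halts_within(machine, max_steps):
    """Simulate `machine` (a zero-argument generator standing in for a
    Turing machine) for at most max_steps steps; return True iff it
    halts within that budget."""
    run = machine()
    for _ in range(max_steps):
        try:
            next(run)
        except StopIteration:
            return True   # halted within the step budget
    return False          # budget exhausted without halting

def purported_decider(machine, size, f):
    # If some computable f satisfied f(n) >= BB(n) for all n, this
    # would correctly decide halting -- which is exactly the
    # contradiction the argument above relies on.
    return "halt" if halts_within(machine, f(size)) else "not halt"
```

The argument then runs: this decider is computable whenever f is, yet no computable decider for the halting problem exists, so no computable f can upper-bound BB everywhere.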
This indeed helps me! The key distinction that there is no computable function that solves the halting problem, and whether we can prove it is irrelevant, clarifies the issue. Thanks, I definitely learned something here, even if it’s obvious in retrospect.
Many of the most interesting theorems in math are obvious only in retrospect.
Any computable function f of one variable x is computed by some program; say that program has size p. The input x also has a data size d. Then BB(p+d) >= f(x), by definition, for all such f and x. If you think you might have a (function, input) pair (and corresponding (program, data) pair) for which this is not true, reread the previous sentence.
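A toy way to see the "function plus input becomes one program" packing in that inequality (my own illustration, not part of the original comment): hard-coding the input x into f's program yields a single zero-argument program of size roughly p + d, and BB at that size bounds anything such a program can do.

```python
import functools

def package(f, x):
    """Hard-code input x into f, producing a zero-argument program.
    Its description length is roughly size(f) + size(x) -- the p + d
    in the inequality BB(p+d) >= f(x)."""
    return functools.partial(f, x)

# One (function, input) pair collapsed into a single program:
prog = package(lambda n: n * n, 7)
```

`functools.partial` plays the role of wiring the data into the program text, which is why a bound indexed by p + d covers every (function, input) pair.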
This approach leaves open the possibility that f(x) = BB(p+d) right?
No, because f is assumed to be computable from the start, which BB is not (otherwise it could be used as a subroutine in a program that solves the halting problem).
I think Scott's reasoning is correct in the end. Suppose you had a computable function f(N) such that f(N) is always greater than BB(N). Then you could exploit f to solve the halting problem: given a program of length N, run it for f(N) steps. If it halts within that time, you know it's a halting program; if it doesn't, you know it will never halt.
But does "a computable function that grows faster than BB(N)" actually mean that f(N) is always greater than BB(N)? Or could it be taken to mean that f(N) > BB(N) for a proportion of N that tends to 1? In other words, could you have a function that almost always upper-bounds BB(N), except on a sparse (but still infinite) set of exceptions? Such a function would be larger than BB in a meaningful sense, but if the set of exceptions is not computable, it wouldn't be possible to solve the halting problem with it. There may be a problem with this concept, but it isn't obvious to me.
> "a computable function that grows faster than BB(N)"
In the context Scott Aaronson was using it, probably yes, as that is necessary for the most straightforward/obvious proof to work.
I am curious if there is a proof for the less obvious case, though. Could a function be greater than BB most of the time? What are the conditions, exactly?
Yes, this I understand. I agree that it is impossible to “prove” that a computable function f(N) is always greater than BB(N).
My concern is that the argument leaves open the possibility of a larger computable function, even if it would be impossible to demonstrate that it is in fact larger for all N.
I’m sure that this possibility is somehow foreclosed (that is, I’m not trying to say that the claim is wrong, just that I think there is a case that isn’t covered by the argument). But I don’t quite see it.
No, it has been proven that there exists no Turing machine that can solve the halting problem. If we assume that there exists a computable f(N) that is always greater than BB(N), then we can construct a Turing machine that solves the halting problem. Therefore no such computable f(N) can exist.
This was the key missing part for me; the program can’t exist, whether we can prove it is beside the point. Thanks.
I think the issue is related to the notion of a "direct" proof vs. a "proof by contradiction". What I gave was a proof by contradiction. Once you start thinking about the philosophy of proofs, it's an interesting question whether proofs by contradiction should really count. I don't know much about this, but searching for the term "law of the excluded middle" is a good place to start reading about these concepts.
Trust me, I understand proof by contradiction :). The point I was missing was the difference between a question being undecidable and a thing being impossible. See the rest of the thread above.
It sounds to me like you have a good line of thought.
The key is that we can't prove that your function f grows faster than BB. That makes all the difference.
Aaronson’s argument shows by contradiction that a computable upper bound D(N) that grows more rapidly than BB(N) cannot exist because otherwise we’d be able to use it to solve the halting problem.