I don't think this is an important distinction. The point of Turing machines is that you can ask questions like "can all instances of this problem be solved uniformly, and at what relative cost?". Formally this requires unbounded memory, because instances of (most) problems can be arbitrarily large.

Yes, if you ask the same question with a fixed (finite) memory restriction on everything, the answer is uninteresting. To me, this is... uninteresting. It tells you nothing about the underlying logical structure of your problem, which is what mathematicians are really trying to get at!

(first and foremost Turing machines are a tool for analysing problems mathematically, not a model of your laptop)

Also note that "states" in your sense are not the same as "states" in a Turing machine (which are just one of the inputs to the transition function). There are Turing machines with fewer than ten thousand states whose halting behaviour cannot be settled in ZFC (https://arxiv.org/abs/1605.04343).
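To make that distinction concrete, here's a minimal sketch of the formalism (this is the standard 2-state busy beaver, not the machine from the linked paper): the "state" is one argument to the transition function, while the full configuration also includes the unbounded tape and head position.

```python
# Minimal Turing machine simulator. The "state" is just one argument to the
# transition function; the full configuration also includes the unbounded
# tape and the head position.
from collections import defaultdict

# Standard 2-state busy beaver (plus an explicit halt state):
# (state, read symbol) -> (write symbol, head move, next state)
DELTA = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run(delta, start="A", max_steps=10_000):
    tape = defaultdict(int)             # unbounded tape, blank symbol 0
    head, state, steps = 0, start, 0
    while state != "HALT" and steps < max_steps:
        write, move, state = delta[(state, tape[head])]
        tape[head] = write
        head += move
        steps += 1
    return steps, sum(tape.values())    # (steps taken, ones written)

print(run(DELTA))                       # prints: (6, 4)
```

Two control states, yet the machine visits infinitely many *configurations* in principle; the undecidability results are about machines like this with a few thousand control states, not about bounded-memory "state" in the everyday sense.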

> Yes, if you ask the same question with a fixed (finite) memory restriction on everything, the answer is uninteresting. To me, this is... uninteresting. It tells you nothing about the underlying logical structure of your problem, which is what mathematicians are really trying to get at!

> (first and foremost Turing machines are a tool for analysing problems mathematically, not a model of your laptop)

I think what makes Animats's distinction worth stressing is that a lot of people miss this and fall into the trap of making false/misleading claims about the impact of the halting problem on actual computers/software.

For instance, claiming that the halting problem proves there are problems humans can solve that computers fundamentally can't, giving "the halting problem makes this impossible" as a reason against adding a halt/loop detector to a chip emulator, making all sorts of weird claims about AI[0][1], or arguably even this author's claim that a halt detector would make bcrypt cracking trivial.

[0]: https://mindmatters.ai/2025/02/agi-the-halting-problem-and-t...

[1]: https://arxiv.org/pdf/2407.16890

To be fair to [0], Leonid Levin (co-discoverer of NP-completeness) makes a similar but slightly weaker claim in a more formal sense: that no algorithm or even random process can increase mutual algorithmic information between two strings (beyond O(1)), which includes any process that attempts to increase mutual information with the halting sequence (https://cs-web.bu.edu/fac/lnd/dvi/IIjacm.pdf).
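If I've read the paper right, the deterministic half of that conservation law has roughly this shape (my paraphrase, not Levin's exact statement): for any algorithmic transformation $A$,

```latex
I\big(A(x) : y\big) \;\le\; I(x : y) + O(1)
```

with a probabilistic counterpart saying that a random process can exceed this bound only with correspondingly small probability.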

Nevertheless, we clearly do have some finite amount of information about this sequence, evident in the axioms of PA or ZFC or any other formal system that proves infinitely many programs non-halting (which is why the Busy Beaver project has been able to provably confirm BB(5)). We presume these systems are truly consistent and sound even if that fact is itself unprovable, so then where exactly did the non-halting information in the axioms of these systems “come from”?

Levin stops short of speculating about that, simply leaving it at the fact that what we have so far cannot be extended further by either algorithmic or random means. But if that’s the case, then either AI is capped at these same predictive limits as well (i.e., in the sense of problems AI could solve that humans could not, both given unlimited resources), or there is additional non-halting information embedded in the environment that an AI could “extract” better than a human. (I suppose it’s also possible we haven’t fully exploited the non-halting information we do have, but I think that’s unlikely since we’re not even sure whether BB(6) is ZFC-provable and that’s a rather tiny machine).

What is the halting sequence? It isn't mentioned anywhere in your linked article (or on Google), which makes the rest of your response a little hard to make sense of.

He mentions it on the second page:

> any Martin-Lof random sequence that computes (i.e. allows computing from it) a consistent completion of PA also computes the Halting Problem H; and by deLeeuw et al. [1965], only a recursive sequence (which H is not) can be computed with a positive probability by randomized algorithms.

It's just the sequence of the solutions to the halting problem, i.e., the characteristic function of the halting set. Levin points out this sequence is not recursive/decidable.
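Spelling that out: fix any enumeration of programs $p_1, p_2, \dots$; then

```latex
H(i) =
\begin{cases}
1 & \text{if } p_i \text{ halts (say, on empty input)} \\
0 & \text{otherwise.}
\end{cases}
```

$H$ is not computable, but any finite prefix of it is a finite amount of information, which is what the rest of the thread is circling around.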

Thanks - I have to admit there's far too much jargon in there for me to make heads or tails of the main claims, let alone say anything sensible about your original reply. I can't even work out how your statement that no algorithmic process can increase the mutual information between two given strings is formalised in the language of the paper, since there seems to be an obvious counterexample: the mutual information between an infinite string of 0s and the halting sequence can be made arbitrarily large by a program with oracle access to the halting sequence, simply by transforming each element into the corresponding element of the halting sequence. So of course I am missing something.

But it seems interesting, thanks for the link =)

That's fair, I've run into a lot of misconceptions like that before. And probably believed some of them at earlier points in my life =P

(Roger Penrose himself makes a version of that claim in "the Emperor's New Mind", so we're in good company...)

About the bcrypt thing... yeah, so one way you can do it is write a program that generates all possible keys starting with the letter 'a', and tests them. Then you run your halting oracle on that, and if it returns true you know the first letter of the key is 'a'.

Do this for each index/letter combination and you can crack bcrypt keys with a number of halting-oracle calls that is linear in the key length (times the alphabet size).
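A sketch of that reduction, with a toy hash standing in for bcrypt and the oracle mocked by bounded search (both stand-ins are hypothetical, since a real halting oracle doesn't exist):

```python
# Sketch of the reduction: recover a key letter by letter, with one
# halting-oracle query per (position, letter) pair.
import hashlib
from itertools import product

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
KEY_LEN = 4                             # tiny keys so the mock oracle is fast

def toy_hash(key):                      # stand-in for bcrypt
    return hashlib.sha256(key.encode()).hexdigest()

def oracle_says_halts(prefix, target):
    """Mock oracle for the program 'enumerate every key with this prefix;
    halt when one hashes to target'. A real oracle would answer without
    searching; here we just run the (bounded) search ourselves."""
    n = KEY_LEN - len(prefix)
    return any(toy_hash(prefix + "".join(rest)) == target
               for rest in product(ALPHABET, repeat=n))

def crack(target):
    key = ""
    for _ in range(KEY_LEN):            # one position at a time
        for letter in ALPHABET:         # at most 26 oracle calls each
            if oracle_says_halts(key + letter, target):
                key += letter
                break
    return key

print(crack(toy_hash("spam")))          # prints: spam
```

The point is the call count: at most |alphabet| × key-length oracle queries, even though each mocked query here is secretly doing the exponential work a real oracle wouldn't need to.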

> About the bcrypt thing... yeah, so one way you can do it is write a program that generates all possible keys starting with the letter 'a', and tests them. Then you run your halting oracle on that, and if it returns true you know the first letter of the key is 'a'.

And you really can do that. For actual computers we have cycle detection algorithms that can tell you definitively whether your program will halt.

Determining whether the program halts can still be complex or slow, which is the case here, but the halting problem does not make it undecidable (it doesn't apply to real, finite computers), nor does it place any lower bound on that complexity (there's nothing preventing a more advanced halt detector from finding better shortcuts).
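For example (a generic sketch, not tied to any particular emulator): if the whole machine state fits in finite memory and the step function is deterministic, Floyd's tortoise-and-hare decides halting while storing only two copies of the state:

```python
# Floyd's tortoise-and-hare: decides halting for any deterministic program
# whose entire machine state fits in finite memory, using two state copies.
def halts(step, start, is_halted):
    """step: deterministic transition function; is_halted: halting test."""
    slow = fast = start
    while True:
        for _ in range(2):              # fast moves two steps per iteration
            if is_halted(fast):
                return True             # halting state reached
            fast = step(fast)
        slow = step(slow)
        if slow == fast:                # same state revisited: loops forever
            return False

# Toy "machine": state is a counter that steps by 2 mod 10.
step = lambda n: (n + 2) % 10
print(halts(step, 0, lambda n: n == 8))   # True:  0,2,4,6 reaches 8
print(halts(step, 0, lambda n: n == 7))   # False: even counter never hits 7
```

Determinism is what makes this sound: if a state ever repeats, the run can never reach anything new, so a detected cycle really does mean "runs forever".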

The conversation is going in circles here =) I agree with everything you have written.


> (Roger Penrose himself makes a version of that claim in "the Emperor's New Mind", so we're in good company...)

Right. That leads to the whole 'brains are special and analog and quantum or something and thus can't be emulated digitally' line of thought. That line of argument is looking rather threadbare since LLMs got good. That's a different issue than decidability, though. Penrose does raise a good question as to how biological brains get so much done with so little power and rather low-frequency signals.