Oh no, I am not at all trying to find an explanation of why this is so (qualia etc.). There is simply no need for that. It is interesting, but not part of the scientific problem that I tried to find an answer to.
The proofs (all three of them) hold without any explanatory effort concerning the causality behind human frame-jumping etc.
For this paper, it is absolutely sufficient to prove that a) this cannot be reached algorithmically, and that b) evidence clearly shows that humans can (somehow) do this, as they have already done it (quite often).
> this cannot be reached algorithmically
> humans can (somehow) do this
Is this not contradictory?
Alternatively, in order not to be contradictory, doesn't it require the assumption that humans are not "algorithmic"? But doesn't that then presuppose (as the above commenter brought up) that we are not a biochemical machine? Is a machine not inherently algorithmic in nature?
Or, at minimum, it presupposes that humans are more than just a biochemical machine. But then the question comes up again: where is the scientific evidence for this? In my view it's perfectly acceptable if the answer is something to the effect of "we don't currently have evidence for that, but this hints that we ought to look for it".
All that said, does "algorithmically" here perhaps exclude heuristics? Many times something can be shown to be unsolvable in the absolute sense yet readily solvable with extremely high success rate in practice using some heuristic.
> Alternatively, in order not to be contradictory, doesn't it require the assumption that humans are not "algorithmic"? But doesn't that then presuppose (as the above commenter brought up) that we are not a biochemical machine? Is a machine not inherently algorithmic in nature?
No, computation is algorithmic; real machines are not necessarily. (Of course, AGI still can't be ruled out even if algorithmic intelligence is; only an AGI that does not incorporate some component with noncomputable behavior would be ruled out.)
> computation is algorithmic, real machines are not necessarily
The author seems to assume the latter condition is definitive, i.e. that real machines are not algorithmic, and then derives extrapolations from that unproven assumption.
> No, computation is algorithmic, real machines are not necessarily
As the adjacent comment touches on, are the laws of physics (as understood to date) not possible to simulate? Can't all possible machines be simulated, at least in theory? I'm guessing my knowledge of the term "algorithmic" is lacking here.
Using computation/algorithmic methods we can simulate nonalgorithmic systems. So the world within a computer program can behave in a nonalgorithmic way.
Also, one might argue that universe/laws of physics are computational.
OP seems to have a very confused idea of what an algorithmic process means... they think the process of humans determining what is truthful "cannot possibly be something algorithmic".
Which is certainly an opinion.
> whatever it is: it cannot possibly be something algorithmic
https://news.ycombinator.com/item?id=44349299
Maybe OP should have looked at a dictionary for what certain words actually mean before defining them to be something nonsensical.
> For this paper, it is absolutely sufficient to prove that a) this cannot be reached algorithmically, and that b) evidence clearly shows that humans can (somehow) do this, as they have already done it (quite often).
The problem with these kinds of arguments is always that they conflate two possibly related but non-equivalent kinds of computational problem solving.
In computability theory, an uncomputability result essentially only proves that it's impossible to have an algorithm that will in all cases produce the correct result to a given problem. Such an impossibility result is valuable as a purely mathematical result, but also because what computer science generally wants is a provably correct algorithm: one that will, when performed exactly, always produce the correct answer.
However, just as with any mathematical proof, a single counter-example is enough to refute a claim of correctness. Showing that an algorithm fails in even a single corner case makes the algorithm incorrect in the classical sense. Similarly, for a computational problem, showing that any purported algorithm must fail on at least one input is enough to prove the problem uncomputable -- again, in the classical computability-theory sense.
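To make that proof pattern concrete, here is a minimal Python sketch of the classic diagonalization argument against a universal halting decider. The function halts() is hypothetical by design; the whole point is that no total, always-correct implementation of it can exist:

    # Hypothetical: halts(f, x) returns True iff calling f(x) terminates.
    # No always-correct, always-terminating implementation can exist.
    def paradox(f):
        if halts(f, f):   # the decider claims f(f) terminates...
            while True:   # ...so loop forever,
                pass
        else:
            return        # ...otherwise terminate immediately.

    # Asking halts(paradox, paradox) forces a contradiction: whichever
    # answer it returns, paradox(paradox) does the opposite. That single
    # inevitable failure is all the uncomputability proof needs.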
If you cannot have an exact algorithm, for either theoretical or practical reasons, and you still want a computational method for solving the problem in practice, you then turn to heuristics or something else that doesn't guarantee correctness but which might produce workable results often enough to be useful.
Even though something like the halting problem is uncomputable in the classical, always-inevitably-produces-the-correct-answer-in-finite-time sense, that does not necessarily stop it from being solved in a subset of cases, or from being solved often enough by some kind of heuristic or non-exact algorithm to be useful.
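A timeout is the crudest such heuristic: run the program for a fixed budget and guess "does not halt" if the budget is exceeded. A minimal sketch (the function name and budget are my own illustration, not anything from the paper):

    import multiprocessing

    def probably_halts(f, arg, budget_seconds=1.0):
        # Heuristic, not a decider: run f(arg) in a subprocess and guess
        # "does not halt" if it outlives the time budget. A slow but
        # terminating f gets misclassified, so this offers usefulness in
        # practice rather than the classical guarantee of correctness.
        p = multiprocessing.Process(target=f, args=(arg,))
        p.start()
        p.join(budget_seconds)
        if p.is_alive():
            p.terminate()
            return False  # guess: probably loops (may be wrong)
        return True       # observed to halt within the budget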
When you say that something cannot be reached algorithmically, you're saying it's impossible to have an algorithm that would inevitably, systematically, always reach that solution in finite time. And you would in many cases be correct. Symbolic AI research ran into this problem due to the uncomputability of reasoning in predicate logic. (Uncomputability was not the main problem symbolic AI ran into, but it was one of them.)
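The predicate-logic case is a good illustration because first-order validity is semidecidable: a proof search will eventually confirm a valid formula but can run forever on an invalid one. A toy sketch, where enumerate_proofs() and checks() are hypothetical placeholders:

    def is_valid(formula):
        # Semidecision procedure: if the formula is provable, some finite
        # proof eventually turns up and we return True. If it is not,
        # this loop never terminates, and there is no general way to
        # detect that and answer False.
        for proof in enumerate_proofs():  # hypothetical proof enumerator
            if checks(proof, formula):    # hypothetical proof checker
                return True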
The problem is that when you say that humans can somehow do this computationally impossible thing, you're not holding human cognition or problem solving to the same standard of computational correctness. We do find solutions to problems, answers to questions, and logical chains of reasoning, but we aren't guaranteed to.
You do seem to be aware of this, of course.
But you then run into the inevitable question of what you mean by AGI. If you hold AGI to the standard of classical computational correctness, to which you don't hold humans, you're correct that it's impossible. But you have also proven nothing new.
A more typical understanding of AGI would be something similar to human cognition -- not having formal guarantees, but working well enough for operating in, understanding, and producing useful results in the real world. (Human brains do that well -- thanks to having evolved in it!)
In the latter case, uncomputability results do not prove that kind of AGI to be impossible.