> For this paper, it is absolutely sufficient to prove that a) this cannot be reached algorithmically and that b) evidence clearly shows that humans can (somehow) do this, as they have already done this (quite often).
The problem with these kinds of arguments is always that they conflate two possibly related but non-equivalent kinds of computational problem solving.
In computability theory, an uncomputability result essentially proves only that no algorithm can produce the correct answer to a given problem in all cases. Such an impossibility result is valuable as a purely mathematical result, but also because what computer science generally wants is a provably correct algorithm: one that, when performed exactly, always produces the correct answer.
However, as with any mathematical proof, a single counter-example is enough to invalidate a proof of correctness. Showing that an algorithm fails in even a single corner case makes it incorrect in the classical algorithmic sense. Similarly, for a computational problem, showing that any purported algorithm must fail on at least one input is enough to prove the problem uncomputable -- again, in the classical computability theory sense.
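To make that concrete, here is a minimal Python sketch of the classic diagonalization argument for the halting problem. The `halts` function is hypothetical -- the whole point is that no such function can exist -- and all the names here are mine, for illustration only:

```python
def halts(program, argument):
    """Hypothetical perfect halting decider: assumed (for the sake of
    contradiction) to always correctly answer whether program(argument)
    eventually terminates. No such function can actually be written."""
    ...

def diagonal(program):
    # Do the opposite of whatever halts() predicts for a program
    # applied to its own source.
    if halts(program, program):
        while True:   # halts() said "terminates", so loop forever
            pass
    else:
        return        # halts() said "loops forever", so terminate

# Now consider diagonal(diagonal): whatever halts(diagonal, diagonal)
# answers, diagonal(diagonal) does the opposite. This one constructed
# counter-example is enough to rule out every candidate halts().
```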
If you cannot have an exact algorithm, for either theoretical or practical reasons, and you still want a computational method for solving the problem in practice, you then turn to heuristics or something else that doesn't guarantee correctness but which might produce workable results often enough to be useful.
Even though something like the halting problem is uncomputable in the classical, always-inevitably-produces-correct-answer-in-finite-time sense, that does not necessarily stop it from being solved in a subset of cases, or from being solved often enough by some kind of heuristic or non-exact algorithm to be useful.
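As a toy illustration (the names and the restriction to one narrow program shape are my own invention), here is a partial halting checker: it only handles programs of the form "iterate a step function until the value 1 is reached", and it honestly answers "unknown" when its budget runs out:

```python
def halts_within(step, start, budget=10_000):
    """Partial halting checker for programs of one narrow shape:
    iterate `step` from `start` until the value 1 is reached.
    Returns 'halts' or 'loops' when it can tell, 'unknown' otherwise."""
    seen = set()
    x = start
    for _ in range(budget):
        if x == 1:
            return "halts"
        if x in seen:
            return "loops"   # a repeated state proves it never reaches 1
        seen.add(x)
        x = step(x)
    return "unknown"         # budget exhausted: no verdict either way

collatz = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1
print(halts_within(collatz, 27))          # 'halts' (after 111 steps)
print(halts_within(lambda n: n + 2, 5))   # 'unknown' -- never halts, but
                                          # this checker cannot prove it
```

There is no contradiction with the halting problem here: the checker is allowed to say "unknown", which is exactly the escape hatch the classical correctness standard forbids. Static analyzers and termination provers work in this spirit -- reliable when they do answer, but allowed to give up.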
When you say that something cannot be reached algorithmically, you're saying it's impossible to have an algorithm that would inevitably, systematically, always reach that solution in finite time. And in many cases you would be correct. Symbolic AI research ran into exactly this with the undecidability of reasoning in first-order predicate logic. (Undecidability was not the main obstacle symbolic AI ran into, but it was one of several.)
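To illustrate the shape of that problem (a toy of my own construction, not how any particular symbolic AI system worked): a forward-chaining reasoner whose rules can generate ever-larger terms either risks running forever or has to cut the search off and report "unknown":

```python
# Terms are nested tuples: ('s', ('0',)) is the numeral 1, and so on.
FACTS = {('nat', ('0',))}

def rules(fact):
    # Single rule: nat(X) => nat(s(X)). It generates ever-larger
    # terms, so exhaustive unbounded chaining would never terminate.
    if fact[0] == 'nat':
        yield ('nat', ('s', fact[1]))

def provable(goal, max_rounds=500):
    known, frontier = set(FACTS), list(FACTS)
    for _ in range(max_rounds):
        if goal in known:
            return "proved"
        if not frontier:
            return "not provable"   # search space exhausted
        frontier = [g for f in frontier for g in rules(f) if g not in known]
        known.update(frontier)
    return "unknown"                # bound hit: no verdict

two = ('s', ('s', ('0',)))
print(provable(('nat', two)))    # 'proved'
print(provable(('even', two)))   # 'unknown' -- not derivable, but the
                                 # bounded search cannot establish that
```

This mirrors the semi-decidability of first-order logic: if a proof exists, systematic search will eventually find it, but when no proof exists the search may simply never end, which is why bounds and "unknown" verdicts are unavoidable.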
The problem is that when you say that humans can somehow do this computationally impossible thing, you're not holding human cognition or problem solving to the same standard of computational correctness. We do find solutions to problems, answers to questions, and logical chains of reasoning, but we aren't guaranteed to.
You do seem to be aware of this, of course.
But then you run into the inevitable question of what you mean by AGI. If you hold AGI to the standard of classical computational correctness -- a standard to which you don't hold humans -- you're correct that it's impossible. But you have also proven nothing new.
A more typical understanding of AGI would be something similar to human cognition -- no formal guarantees, but working well enough for operating in, understanding, and producing useful results in the real world. (Human brains do that well in the real world -- thanks to having evolved in it!)
In the latter case, uncomputability results do not prove that kind of AGI to be impossible.