TFA presents an information-theoretic argument for AGI being impossible. My reading of your parent commenter is that they are asking why this argument does not also apply to humans.
You make broadly valid points, particularly about the advantages of embodiment, but I just don't think they're good responses to the theoretical article under discussion (or the comment you were responding to).