That doesn't seem like a useful argument either.
If future AI only manages to solve the variance problem, then it will have problems related to bias.
If future AI only manages to solve the bias problem, then it will have problems related to variance.
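For what it's worth, the tradeoff being invoked here is just the standard bias/variance decomposition: shrink one term and the other tends to grow. A minimal sketch of that effect (pure NumPy; the target function, polynomial degrees, and noise level are my own illustrative assumptions, not anything from the article):

```python
import numpy as np

# Illustrative bias/variance tradeoff: a low-capacity model (degree 1) underfits
# (high bias, low variance); a high-capacity model (degree 9) does the opposite.
rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(3 * x)

x_test = np.linspace(-1, 1, 50)
n_trials, n_train, noise = 200, 30, 0.3

for degree in (1, 9):
    preds = np.empty((n_trials, x_test.size))
    for t in range(n_trials):
        # Fresh training sample each trial, so we can measure spread across fits.
        x_tr = rng.uniform(-1, 1, n_train)
        y_tr = true_fn(x_tr) + rng.normal(0, noise, n_train)
        coeffs = np.polyfit(x_tr, y_tr, degree)
        preds[t] = np.polyval(coeffs, x_test)
    bias_sq = np.mean((preds.mean(axis=0) - true_fn(x_test)) ** 2)
    variance = np.mean(preds.var(axis=0))
    print(f"degree={degree}  bias^2={bias_sq:.3f}  variance={variance:.3f}")
```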
If problem X is solved, then the system that solved it won't have problem X. That's not very informative without some idea of how likely it is that X can or will be solved, and current AI is a better prior than "something will happen".
> That's not very informative without some idea of how likely it is that X can or will be solved
Exactly. The author's argument would be much better if it were qualified by addressing this assumption.
> current AI is a better prior than "something will happen".
“Current AI” is not a prior, it's a static observation. A prior would be some distribution over how likely X is to be solved, which observing today's systems doesn't give you on its own.