There’s not a useful argument here. The article extrapolates future AI failure modes from current AI. If future AI models solve the ‘incoherence’ problem, that leaves bias as the primary source of failure (the author seems to treat these as the only two possible failure modes).

That doesn't seem like a useful argument either.

If future AI only manages to solve the variance problem, then it will have problems related to bias.

If future AI only manages to solve the bias problem, then it will have problems related to variance.

If problem X is solved, then the system that solved it won't have problem X. That's not very informative without some idea of how likely it is that X can or will be solved, and current AI is a better prior than "something will happen".
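
For what it's worth, I'm assuming "bias" and "variance" here are meant in the standard statistical sense. A minimal sketch of that framing, under the usual squared-error setup (the symbols y, f, f-hat, and sigma are the textbook ones, not anything from the article):

```latex
\mathbb{E}\!\left[\bigl(y - \hat{f}(x)\bigr)^2\right]
  = \underbrace{\bigl(\mathbb{E}[\hat{f}(x)] - f(x)\bigr)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\!\left[\bigl(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\bigr)^2\right]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
```

Driving one term to zero says nothing about the size of the other terms, which is exactly why "solve variance, still have bias" and "solve bias, still have variance" are symmetric and equally uninformative on their own.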

> That's not very informative without some idea of how likely it is that X can or will be solved

Exactly. The author's argument would be much better qualified if it addressed this assumption.

> current AI is a better prior than "something will happen".

“Current AI” is not a prior; it's a static observation.
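
To spell that out, a sketch in standard Bayesian terms (which is an assumption about what the parent means by "prior"): observations enter through the likelihood, not the prior.

```latex
\underbrace{p(\theta \mid D)}_{\text{posterior}}
  \;\propto\;
  \underbrace{p(D \mid \theta)}_{\text{likelihood}}
  \;\times\;
  \underbrace{p(\theta)}_{\text{prior}}
```

On that reading, observations of current AI are data D that update beliefs about future capabilities θ; they aren't themselves the prior p(θ).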