"model failures become increasingly dominated by incoherence rather than systematic misalignment."
This should not be surprising.
Systematic misalignment, i.e., bias, must still be coherent and rational if it is to be systematic. That, in turn, would require the AI to reason; but AI does not reason (let alone think), nor does it perform inference. Without reasoning, a model's failures cannot be systematic, only incoherent.