The article points out that the AI + humans approach gives poorer results: humans end up deferring to, or just accepting, the AI output without double-checking it. So corner cases and situations the AI doesn't handle well just pass through the system unchecked.
This is what I worry about: someone gets a little lazy and leans too heavily on the tool, and perhaps their skills diminish over time. It seems AI could be used to review results after an analysis is done. That would be fine with me, but not before.