The problem is when companies dodge responsibility for what their AI does, and these laws would block updating the law to handle that. If your employees reject black loan applicants on sight, that's a winnable lawsuit. If your AI happens to reject every black loan applicant, you can hide behind the algorithm.
If your employees reject black loan applicants because they're black, that's a winnable lawsuit. If they reject black loan applicants because those applicants happened to have bad credit, not so much.
Why are we treating AI as something different? If it's given the applicants' race and that causes it to reject black applicants, it's doing something objectionable. If it's given their race but that doesn't significantly change its determinations, or it isn't given race at all, it's not.
The trouble is that people have hit on a ploy: demand zero racial disparity in outcomes even when non-racial factors (e.g. income, credit history) correlate with race and inherently produce a disparity.
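The mechanism here can be sketched with a toy simulation (all numbers invented for illustration; Python stdlib only): a model that sees only a credit score, never race, still yields different approval rates whenever the score distributions differ between groups.

```python
import random

random.seed(0)
N = 10_000

# Hypothetical illustration: group B has a lower average credit score.
# The means and threshold below are made up, not real lending data.
def draw_scores(mean, n):
    return [random.gauss(mean, 100) for _ in range(n)]

scores_a = draw_scores(700, N)  # group A: higher average score
scores_b = draw_scores(650, N)  # group B: lower average score

THRESHOLD = 680  # the rule sees only the score, never group membership

def approval_rate(scores):
    return sum(s >= THRESHOLD for s in scores) / len(scores)

rate_a = approval_rate(scores_a)
rate_b = approval_rate(scores_b)
# rate_a exceeds rate_b even though race was never an input:
# the disparity comes entirely from the correlated factor.
```

The point of the sketch is that a disparity in outcomes, on its own, can't distinguish a race-blind rule applied to correlated inputs from a rule that actually uses race.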
A cynic would say that plaintiff lawyers don't like algorithms that reduce human bias because filing lawsuits over human bias is how they get paid.