Political parties hitching their wagon to "AI good" or "AI bad" aside, I'm actually a huge fan of this sort of anti-law. Legislators have lately been far too eager to write laws about computers and the Internet and other things they barely understand. A law that puts a damper on all that might give them time to focus on things that actually matter to their constituents, instead of beating the tired old drum of "we've got to do something about this new tech."

The problem comes when companies dodge responsibility for what their AI does, and these laws block any update to the law that would address it. If your employees reject black loan applicants on sight, that's a winnable lawsuit. If your AI happens to reject all black loan applicants, you can hide behind the algorithm.

If your employees reject black loan applicants because they're black, that's a winnable lawsuit. If they reject black loan applicants because those applicants happened to have bad credit, not so much.

Why are we treating AI as something different? If it's given the race of the applicants and that causes it to reject black applicants, it's doing something objectionable. If it's given the race of the applicants but that doesn't significantly change its determinations, or it isn't given their race to begin with, it's not.

The trouble is that people have come up with a ploy where they demand no racial disparity in outcomes, even when non-racial factors (e.g. income, credit history) correlate with race and inherently produce a disparity.

A cynic would say that plaintiff lawyers don't like algorithms that reduce human bias because filing lawsuits over human bias is how they get paid.

Everybody gangsta until AI deletes 90% of white collar jobs.