> The fact that the access speedbump which supposedly worked in the past is no longer good enough is proof that an access speedbump is not a good way to do it.

It worked well enough for a pretty long time. No solution can be expected to work forever; we just need to modify the restrictions on criminal histories to keep up with the times. It's perfectly normal to have to reassess and adjust access controls over time, not only because of technology changes, but also to account for new problems with the use/misuse of the restricted data and our changing values and expectations for how that data should be used and accessed.

> If you don't want algorithms to come to spurious and discriminatory conclusions, you must make algorithms auditable, and give the public reasonable access to interrogate these algorithms that impact them.

I think we'd have much better success restricting access to the data than handing it out freely and trying to regulate what everyone everywhere does with it after they already have it. AI in particular will be very hard to regulate (as much as I agree that transparent/auditable systems are what we want), and I don't expect we'd have much success regulating what companies do behind closed doors or forcing them to be transparent about their use of AI.

We both agree that companies should be held liable for the discriminatory outcomes of their hiring practices, whether or not they use AI. The responsibility should always fall on the company and the humans running the show, whatever their tools/processes, since they decide which to use and how to use them.

We also agree that discrimination itself should be outlawed, but that remains an unsolved problem since detection and enforcement are extremely difficult. It's easier to limit the opportunity to discriminate than to try to catch companies in the act. You mention that hiding people's race doesn't work, but that's actually being explored as a way to avoid bias in hiring. For example, stripping names and addresses (which can hint at race) from resumes before passing them to screening algorithms seems like it could help reduce unintentional discrimination (see the sketch below).
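To make that concrete, here's a minimal sketch of the kind of pre-processing I mean, assuming resumes arrive as structured records (the field names here are hypothetical, not from any real system):

```python
# Strip fields that directly identify a candidate or hint at protected
# traits before the resume ever reaches the screening model.
PII_FIELDS = {"name", "address", "email", "phone", "photo_url"}

def anonymize_resume(resume: dict) -> dict:
    """Return a copy of the resume with identifying fields removed."""
    return {k: v for k, v in resume.items() if k not in PII_FIELDS}

resume = {
    "name": "Jane Doe",
    "address": "123 Main St, Springfield",
    "email": "jane@example.com",
    "skills": ["Python", "SQL"],
    "experience_years": 7,
}

screened = anonymize_resume(resume)
print(screened)  # {'skills': ['Python', 'SQL'], 'experience_years': 7}
```

Only the anonymized version would ever be fed to the algorithm. The hard part in practice is catching indirect proxies (zip codes, school names, club memberships, etc.), which a simple field filter like this won't handle.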

Ultimately, there'll always be opportunities for a bigot to discriminate in the hiring process, but I think we can use a multifaceted approach to limit those opportunities and hopefully force them to act more explicitly, making deliberate discrimination a little easier to catch.