> What does it say about the tech industry that so many orgs continued to use a system that was known to be broken for so long?

Well, broken in what sense? In the sense that HN complains about it? Then it says nothing at all, because that is not a signal of anything: HN reliably complains about ALL interview methods, including coding challenges, chats about experience, take-home exams, paid trial days of work, and so on.

If it is "broken" in some other sense, we would have to quantify that and compare it to some alternative. I'm not aware of any good quantitative data on this sort of thing, and I can imagine that the challenges to measuring it meaningfully are considerable. A qualitative survey of the tech industry would likely surface opinions as wide-ranging as the HN complaints.

Basically, you start from the conclusion that what we have is extremely suboptimal, but that's not clear to me. I think it is uncontroversial to assert that coding challenges are not an amazing predictor of job performance and produce plenty of false positives and false negatives; it's easy to find agreement there. What is unclear is whether any other method is better at predicting job performance.

It may just be that predicting job performance is extremely difficult, and that even a relatively good method for doing so is still not very good in absolute terms.