> These "invisible AI" tools aren't destroying technical interviews - they're just exposing how broken they already were.

Whenever the topic of how broken tech interviews are has come up on HN in the past, there have usually been two crowds: the "they suck, but they're the best we've got" people, and the much less common "they suck, so we do something else" crowd. Almost everyone agreed they were broken.

What does it say about the tech industry that so many orgs continued to use a system that was known to be broken for so long? How much inefficiency and waste over the past couple of decades is attributable to bad hires? And conversely, how many efficiency improvements in the near future are going to get attributed to AI tech rather than to the side effect of improved interviewing practices meant to combat AI-assisted candidates?

> What does it say about the tech industry that so many orgs continued to use a system that was known to be broken for so long?

Well, broken in what sense? In the sense that HN complains about it? Then it says nothing at all, because that is not a signal of anything. HN reliably complains about ALL interview methods, including coding challenges, chats about experience, take-home exams, paid days of work, and so on.

If it is "broken" in another sense, we would have to quantify that and compare it to some alternative. I'm not aware of any good quantitative data on this sort of thing and I can imagine that the challenges to meaningfully measuring this are considerable. A qualitative survey of the tech industry is likely to surface opinions as wide ranging as HN complaints.

Basically, you start with the conclusion that what we have is extremely suboptimal, but that's not clear to me. I think it is uncontroversial to assert that coding challenges are not an amazing predictor of job performance and lead to lots of false positives and negatives. It's easy to find lots of agreement there. What is unclear is whether there are other methods that are better at predicting job performance.

It may just be that predicting job performance is extremely difficult and that even a relatively good method for doing so is not very good.

> What does it say about the tech industry that so many orgs continued to use a system that was known to be broken for so long?

That they tried to scale hiring. I don't think that technical interviews are bad per se; it's the form they currently take that is bad. I was once applying to a company working on stage animation software - somebody there created a task relevant to their workflow, specified which libraries I should use, and then gave me 14 days, after which I sent them a link to a GitHub repo with the code. It was overall a pleasant experience - I learned something new programming this task, and they were able to assess my code style and programming abilities - but this was long before AI. They were a fairly small company, though.

In large orgs you have an HR person overseeing hiring who doesn't know how to code (fair enough - not their job) and a bunch of engineers usually far too busy with actual problems to create a problem set, plus far too many applicants. So companies throw leetcode at the problem, thinking that any selection criterion is better than no selection criterion.

> the "they suck, but they're the best we've got" people

I'm in this camp and I don't think it's "broken" - at least not in the sense that we have a broadly applicable "fix" that improves the situation.

I.e., everybody hates coding interviews, but they're still close to the best we've got, and they beat credentialism or talking about random stuff (how are you going to compare across candidates?).

> Almost everyone agreed they were broken.

Many? Yes. Almost everyone? No.