I've spent the last decade watching this arms race between interviewers and candidates. Last month I hired a senior dev who aced our interview problems but couldn't implement a basic database migration once we brought him on. Turned out he'd been using tools like this.

The problem isn't the tools - they're inevitable. The problem is that our industry clings to this bizarre ritual where we test for skills that are completely orthogonal to the actual job.

My current team scrapped the algorithmic questions entirely. We now do pair programming on a small feature in our actual codebase, with full access to Google/docs/AI. The only restriction is we watch how they work. This approach has dramatically improved our hit rate on good hires.

What I care about is: Can they reason through a problem? Do they ask good clarifying questions? Can they navigate unfamiliar code? Do they know when to use tools vs when to think?

These "invisible AI" tools aren't destroying technical interviews - they're just exposing how broken they already were.

> These "invisible AI" tools aren't destroying technical interviews - they're just exposing how broken they already were.

Whenever the topic of how broken tech interviews are has come up on HN in the past, there have usually been two crowds: the "they suck, but they're the best we've got" people, and the much less common "they suck, so we do something else" crowd. Almost everyone agreed they were broken.

What does it say about the tech industry that so many orgs continued to use a system that was known to be broken for so long? How much inefficiency and waste over the past couple of decades is attributable to bad hires? And conversely, how many efficiency improvements in the near future are going to get attributed to AI tech rather than the side effect of improved interviewing practices meant to combat AI candidates?

> What does it say about the tech industry that so many orgs continued to use a system that was known to be broken for so long?

Well, broken in what sense? In the sense that HN complains about it? Then it says nothing at all, because that is not a signal of anything. HN reliably complains about ALL interview methods, including coding challenges, chats about experience, take-home exams, paid days of work, and so on.

If it is "broken" in another sense, we would have to quantify that and compare it to some alternative. I'm not aware of any good quantitative data on this sort of thing and I can imagine that the challenges to meaningfully measuring this are considerable. A qualitative survey of the tech industry is likely to surface opinions as wide ranging as HN complaints.

Basically you start with the conclusion that what we have is extremely suboptimal, but that's not clear to me. I think it is uncontroversial to assert that coding challenges are not an amazing predictor of job performance and lead to lots of false positives and negatives. Easy to find lots of agreement there. What is unclear is whether there are other methods that are better at predicting job performance.

It may just be that predicting job performance is extremely difficult and that even a relatively good method for doing so is not very good.

> What does it say about the tech industry that so many orgs continued to use a system that was known to be broken for so long?

That they tried to scale hiring. I don't think that technical interviews are bad per se; it's the form they currently take that is bad. I was once applying to a company working on stage animation software. Somebody there created a task relevant to their workflow, specified which libraries I should use, and then gave me 14 days, after which I sent them a link to a GitHub repo with the code. It was overall a pleasant experience: I learned something new programming the task, and they were able to assess my code style and programming abilities. But this was long before AI. They were a fairly small company though.

In the large orgs you have an HR person overseeing hiring who doesn't know how to code (fair enough - not their job), a bunch of engineers usually far too busy with actual problems to create a problem set, and far too many applicants. So companies throw leetcode at the problem, thinking that any selection criterion is better than no selection criterion.

> the "they suck, but they're the best we've got" people

I'm in this camp and I don't think it's "broken" - at least not in the sense that we have a broadly applicable "fix" that improves the situation.

I.e., everybody hates coding interviews, but they're still close to the best we've got, and they beat credentialism or talking about random stuff (how else are you going to compare across candidates?).

> Almost everyone agreed they were broken.

Many? Yes. Almost everyone? No.

> The problem is that our industry clings to this bizarre ritual where we test for skills that are completely orthogonal to the actual job.

I agree. And it's not a good sign for the industry when engineers have to spend months working up to being good at answering these questions just to get the job.

When I think of the hundreds of hours I’ve devoted to learning how to solve leetcode problems that could’ve otherwise been spent on learning tools, technologies, and architecture patterns that would be useful for my actual job, I genuinely feel a deep, deep sorrow.

That trend was always comical to me. I’ve never been one to memorize documentation, specific function call orders, etc, and I remember miserably failing a coding test for Comcast. The test? Write a full contact management application on an isolated computer, no IDE, using notepad and no installed runtime.

You’re spot on with the uselessness of modern algorithmic questions. We live in a world of high level languages; in 20+ years, I’ve never implemented a single one of those coding problems in an actual codebase. I would prefer to hire an intelligent, sociable person with core skills and who has room to learn and grow, rather than someone who can ace silly algorithms but can’t work with a team or complex codebase.

Heck, the person you refer to having hired, and presumably fired, may not have even used these tools. The trend these days is to study for the coding tests on the sites we all know about, not to learn the ideas or concepts needed to work in a real team on a real application. People spend all their time writing algorithms, but often never actually touch a database or write a real, complex application. The bootcamp explosion only compounded the problem with algorithm-based test questions.

How did you know he used tools like this one? Did you sit him down and interrogate him, or did he own up to it? What was the result? Did you cut him loose?

When did you switch to pair programming? Your post starts by stating you just hired the unqualified dev last month, so did this unqualified dev slip through your pair programming system? And if they did, doesn't that undermine your view that the pair programming approach has dramatically improved your good-hire hit rate? (Or, if pair programming was newly instituted in response to the unqualified hire, how can you claim an improved hit rate after only one month of use?)

It's not necessarily undermined. Presuming the timeline is accurate: large orgs are hiring constantly, and if the commenter is high enough in the org chart over whatever they consider to be their team, they may see a large number of hires.

Less than a month is not long enough to make the determination that your hiring process is producing better hires in my opinion (unless your old bad hires were flaming out in under a month, I guess).

> My current team scrapped the algorithmic questions entirely. We now do pair programming on a small feature in our actual codebase, with full access to Google/docs/AI. The only restriction is we watch how they work. This approach has dramatically improved our hit rate on good hires.

I've been in this gig for about 20 years, and this process is by far the best one to participate in, both as an interviewee and as an interviewer.

As an interviewer, I take a lot more joy in tackling an actual task, or a problem very similar to how my team works, with someone applying for the position; it's easier for me to feel like a human instead of an assessment machine. I don't have to learn the N different potential case studies to run the interview, and I don't need to feel like I'm ticking boxes when giving my feedback - it's all natural.

As a candidate it relaxes me a fucking lot. I don't have to dust off old books or go through a grind of studying while still feeling like I haven't studied enough, because there's an endless amount of knowledge to learn if I need to cover all bases. I also feel a lot more like a human: I can talk through my train of thought and work as I normally would, checking documentation, searching, etc.

This bizarre ritual feels almost like hazing at this point. At some point Big Tech decided this was "the way to hire", then herd mentality took over to the point where tiny startups cargo-culted the process without even questioning it (I once heard from a non-technical founder: "if Google does it then it's the best way to do it"). A few generations of folks hired through this Byzantine process later, it turned into a hazing ritual: if they had to go through the pain, they might as well inflict it to make candidates prove themselves worthy.

It's a giant cycle of bullshit, and no matter how much I've tried, it's been impossible to change HR's mind that this is not the best way to assess candidates. The herd mentality is even stronger over there, since there's always the dumb scapegoat: "we benchmark peers in the industry and they all do it this way, so we will do it this way".

Sorry for the rant, but it's one aspect of this job that really grinds my gears. I fucking hate running interviews with candidates because of it, I never feel like I can properly measure someone's real abilities, and I really hate having to go through it myself...