> We can’t change the fact that live coding is a common practice in tech interviews. But we can try to mitigate the stress it causes.

Yes, we can. Don't do them. But we have to replace them with something that works. That means none of these poorly constructed take-home projects that are almost universally either drastically over-scoped, criminally under-specified, or both.

> we have to replace them with something that works

We don't. The simple solution is to stop maintaining the illusion that a 100% perfect hire rate is possible.

Design your post-hire process around an imperfect hiring rate and a quick feedback loop, and accept the losses (they will happen anyway, despite any perfectionist illusions you choose to maintain).

These are the few questions that really matter:

- Is there a track record of delivering meaningful results?
- Does their past experience seem credible?
- Are their soft skills and communication skills up to your expectations?
- Do they demonstrate adequate hard skills for the role?
- What interests and motivates them on a professional and personal level?

Your interview process will always be just an attempt at answering these somewhat accurately, with diminishing returns after a certain point. Getting actually accurate answers to these is only possible through collaborative work in a real environment.

I was about to suggest that the problem with that is that applicants may think they meet your standards and then get fired, but then I realised that, of course, very few coding interviews measure skills to a sufficient standard to prevent that anyway.

I'm gonna stop you right here, because I never said any such thing:

> We don't. The simple solution is to stop maintaining the illusion that a 100% perfect hire rate is possible.

Take-homes suck on both sides nowadays: if you're the interviewer, how do you know they didn't just plug it into ChatGPT and spend the rest of the afternoon studying the solution?

I just don't know what a better proxy of coding ability even is, once we exclude options that can be gamed or cheated.

> how do you know they didn't just plug it into ChatGPT

If someone can present a good solution and talk about the reasoning behind it with enough detail & savvy to convince you that they actually wrote it, does it matter if they didn't?

The logical conclusion of this line of thought is to just outsource to the cheapest foreign firm that can operate a chatbot.

That presumes that someone using a chatbot would necessarily generate a high-quality solution and be able to explain the underlying reasoning.

And that is my point: there's a difference between someone pushing out slop and someone who is simply not doing the actual labor but could if they wanted to.

What's wrong with it if they studied the ChatGPT solution well enough to explain it, answer questions about corner cases, and suggest improvements? Won't that be a good indicator of the candidate's skills? ChatGPT is one of the daily tools nowadays; we should not ban it, but the one using it should be able to understand and explain the code and its logic, explain how they architected the solution and how the LLM assisted them, and where it worked well and where it didn't.

Probably because you are looking for people who can actually perform a certain job and not just come back with the ChatGPT answer.

If they can produce working code that solves the problem, and explain how it works, that is more than “just com[ing] back with the ChatGPT answer.” I'm not saying ChatGPT doesn't have its own issues, but this is not one of them.

I've had candidates successfully do exactly this to explain how a SQL JOIN works. But I'm not looking for candidates who can read back a GPT answer; I'm looking for people who deeply understand how a join works.
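To make that concrete, here's a minimal sketch of the kind of corner case that separates reciting an answer from understanding one: predicting, without running it, how an INNER JOIN and a LEFT JOIN differ when rows don't match. (The `users`/`orders` tables are hypothetical, purely for illustration.)

```python
import sqlite3

# In-memory database with one user who has no orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (10, 1, 25.0);  -- bob has no orders
""")

# INNER JOIN drops users with no matching order: only alice appears.
print(conn.execute("""
    SELECT u.name, o.total
    FROM users u JOIN orders o ON o.user_id = u.id
""").fetchall())  # [('alice', 25.0)]

# LEFT JOIN keeps every user; bob's order columns come back as NULL (None).
print(conn.execute("""
    SELECT u.name, o.total
    FROM users u LEFT JOIN orders o ON o.user_id = u.id
""").fetchall())  # [('alice', 25.0), ('bob', None)]
```

A candidate who only studied a generated answer can usually define a join; one who understands it can tell you in advance which rows disappear, where the NULLs come from, and what that does to a downstream aggregate.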

I used to be a fan of take-homes, but they have gotten ridiculous. Most importantly, many companies don't even follow up on them! It used to be fairly common etiquette that if you asked somebody to spend a day writing code for you, you at least had the decency to give them feedback.

Well, and as a candidate, while you're going to do some homework on the company in any case, with any real take-home you know you're competing against people who are going to take a weekend or more with it, whatever the instructions say.

Seems like nowadays they'd just hook you up to whatever AI they claim to use, and your job is to tell them why the AI's code would fail real-world tests.

The catch would be not knowing whether the interviewee has the AI cargo-cult priors.