This is a super tough, but incredibly valuable thread. Thank you all for the raw feedback.
I need to be clear: I 100% agree with the core sentiment here. As a candidate, the hiring process is often broken, dehumanising, and feels like a one-way street. Many of you are right that a tool like this could be abused.
I'm not trying to automate the human part of hiring. I'm trying to fix the part that's already broken.
The real-world alternative at most companies isn't a friendly 1-hour chat with a senior engineer for every single applicant; that just doesn't scale.
The alternative is:
1. A harsh, biased CV filter that rejects 95% of applicants in a couple of seconds.
2. A 4-hour take-home exercise that massively wastes your time and is genuinely pointless because anyone can vibe code it.
3. An algorithm test on a platform like HackerRank that most engineers have to spend many hours prepping for.
I built Niju to be less painful than those. It's a 20-minute, practical, "think-aloud" test. The AI's only job is to summarise the data so a human can review it faster, making it more likely they'll widen the funnel and give more people a shot beyond just their CV.
My goal isn't to replace engineers but to stop wasting their time on a broken process, so they can have better, human interviews with the top 20% of candidates.
It's a massive challenge, and this thread, like most of the others, shows the raw nerve I've hit.
> I'm trying to fix the part that's already broken.
The broken part appears to be that somebody spends 20 minutes screening a candidate rather than 5 minutes reviewing an AI summary of the screen. Inefficient, maybe.
Why does this actually get rid of the alternatives? You've reduced the active time to screen a candidate by 75%. But a CV filter reduces it by 95%. And why would your system involve less prep than HackerRank? It's still a test with a critical outcome. The idea that, if only we stopped asking algorithmic questions, nobody would spend time rehearsing for interviews is ridiculous.