I've spent the last decade watching this arms race between interviewers and candidates. Last month I hired a senior dev who couldn't implement a basic database migration when we brought him on but aced our interview problems. Turned out he'd been using tools like this.

The problem isn't the tools - they're inevitable. The problem is that our industry clings to this bizarre ritual where we test for skills that are completely orthogonal to the actual job.

My current team scrapped the algorithmic questions entirely. We now do pair programming on a small feature in our actual codebase, with full access to Google/docs/AI. The only restriction is we watch how they work. This approach has dramatically improved our hit rate on good hires.

What I care about is: Can they reason through a problem? Do they ask good clarifying questions? Can they navigate unfamiliar code? Do they know when to use tools vs when to think?

These "invisible AI" tools aren't destroying technical interviews - they're just exposing how broken they already were.

> These "invisible AI" tools aren't destroying technical interviews - they're just exposing how broken they already were.

Whenever the topic about how broken tech interviews are has showed up on HN in the past, there were usually two crowds: the "they suck, but they're the best we've got" people, and the much less common "they suck, so we do something else" crowd. Almost everyone agreed they were broken.

What does it say about the tech industry that so many orgs continued to use a system that was known to be broken for so long? How much inefficiency and waste over the past couple of decades is attributable to bad hires? And conversely, how many efficiency improvements in the near future are going to get attributed to AI tech rather than the side effect of improved interviewing practices meant to combat AI candidates?

> What does it say about the tech industry that so many orgs continued to use a system that was known to be broken for so long?

Well, broken in what sense? In the sense that HN complains about it? Then it says nothing at all because that is not a signal of anything. HN reliably complains about ALL interview methods including coding challenges, chats about experience, take home exams, paid day-of-work, and so on.

If it is "broken" in another sense, we would have to quantify that and compare it to some alternative. I'm not aware of any good quantitative data on this sort of thing and I can imagine that the challenges to meaningfully measuring this are considerable. A qualitative survey of the tech industry is likely to surface opinions as wide ranging as HN complaints.

Basically you start with the conclusion that what we have is extremely sub optimal, but that's not clear to me. I think it is uncontroversial to assert that coding challenges are not an amazing predictor of job performance and lead to lots of false positives and negatives. Easy to find lots of agreement there. What is unclear is that there are other methods that are better at predicting job performance.

It may just be that predicting job performance is extremely difficult and that even a relatively good method for doing so is not very good.

> the "they suck, but they're the best we've got" people

I'm in this camp and I don't think it's "broken" - at least not in the sense that we have a broadly applicable "fix" that improves the situation.

I.e., everybody hates coding interviews but it's still close to the best we have got, and it beats credentialism or talking about random stuff (how are you going to compare across candidates).

> What does it say about the tech industry that so many orgs continued to use a system that was known to be broken for so long?

That they tried to scale hiring. I don't think that technical interviews are bad per se, it's the form in which they are currently that is bad. I was once applying for a company working on a stage animation software - somebody there created a task relevant to their workflow, they specified which libraries I should use and then gave me 14 days, after which I sent them a link to Github repo with the code. It was overall a pleasant experience - I learned something new programing this task, they were able to access my code style and programing abilities - but this was long before AI. They were a fairly small company though.

In the large orgs you have a HR person overseeing a hiring who doesn't know how to code (fair enough - not their job) and bunch of engineers usually far to busy with actual problems to create a problem set and far to many applicants. So companies drop leetcode at the problem - thinking that any selection criteria is better than non selection criteria.

> Almost everyone agreed they were broken.

Many? Yes. Almost everyone? No.

> The problem is that our industry clings to this bizarre ritual where we test for skills that are completely orthogonal to the actual job.

I agree. And it's not a good sign for the industry when engineers have to spend months working up to being good at answering these questions just to get the job.

When I think of the hundreds of hours I’ve devoted to learning how to solve leetcode problems that could’ve otherwise been spent on learning tools, technologies, and architecture patterns that would be useful for my actual job, I genuinely feel a deep, deep sorrow.

That trend was always comical to me. I’ve never been one to memorize documentation, specific function call orders, etc, and I remember miserably failing a coding test for Comcast. The test? Write a full contact management application on an isolated computer, no IDE, using notepad and no installed runtime.

You’re spot on with the uselessness of modern algorithmic questions. We live in a world of high level languages; in 20+ years, I’ve never implemented a single one of those coding problems in an actual codebase. I would prefer to hire an intelligent, sociable person with core skills and who has room to learn and grow, rather than someone who can ace silly algorithms but can’t work with a team or complex codebase.

Heck, the person you refer to having hired, and presumably fired, may not have even used these tools. The trend these days is to learn to the coding tests from sites we all know about, not the ideas or concepts needed to work in a real team on a real application. They spend all their time writing algorithms, but often never actually touch a database/write a real complex application. The bootcamp explosion was a self-escalating problem for algorithm-based test questions.

How did you know he used tools like this one? You sit him down and interrogate him? Or did they own up to it? What was the result? Did you cut them loose?

When did you switch to pair programming? Your post starts by stating you just hired the unqualified dev last month, so did this unqualified dev slip through your pair programming system? And if they did, doesn’t that undermine your view that the pair programming approach has dramatically improved your good hire hit rate (or, if pair programming is newly instituted in response to the unqualified dev hire, how can you claim to have an improved hit rate in only one month of use)?

It’s not necessarily undermined. Presuming the timeline is accurate: large orgs are hiring constantly and, if the person who commented is high enough in the org chart on whatever they consider to be their team, they may see a large number of hires.

Less than a month is not long enough to make the determination that your hiring process is producing better hires in my opinion (unless your old bad hires were flaming out in under a month, I guess).

> My current team scrapped the algorithmic questions entirely. We now do pair programming on a small feature in our actual codebase, with full access to Google/docs/AI. The only restriction is we watch how they work. This approach has dramatically improved our hit rate on good hires.

I've been on this gig for about 20 years, this process is by far the best one to participate in an interview both as an interviewee as an interviewer.

As an interviewer I have a lot more joy to tackle an actual task, or a problem very similar to how my team works, with someone applying for the position, it's easier for me to feel like a human instead of an assessment machine. I don't have to learn the N different potential case studies to run the interview, I don't need to feel like I'm ticking boxes when giving my feedback, it's all natural.

As a candidate it relaxes me a fucking lot, I don't have to brush off old books, go through a grind of studying and still feeling like I haven't studied enough because there's an endless amount of knowledge to learn if I need to cover all bases. I also feel a lot more like a human, I can talk through my train of thought, work as I normally would checking documentation, searching, etc.

This bizarre ritual feels almost like hazing by this point, at some point Big Tech decided this was "the way for hiring", then herd mentality took over to the point where tiny startups just cargo-culted processes without even questioning (I heard from a non-technical founder once "if Google does it then it's the best way to do it"). A few generations later of folks hired through this Byzantine process turned it into a hazing ritual, if they had to go through this pain then they might as well inflict the pain to make candidates prove themselves worthy.

It's a giant cycle of bullshit, and no matter how much I tried it's been always impossible to change the minds of HR that this is not the best way to assess candidates, the herd mentality is even stronger around there since there's always the dumb scapegoat: "we do benchmark peers in the industry and they all do this way, so we will do this way".

Sorry for the rant but it's one aspect of this job that really grinds my gears, I fucking hate running interviews with candidates because of it, I never feel I could properly measure someone's real abilities, and I really hate having to go through it myself...

My approach to technical interviews is just to talk shop with the candidate for an hour.

Throughout the conversation, we mostly stay light and touch on a lot of different topics. But every so often, I’ll drill in and start discussing some random topic at depth. If you drill in just 2-3 times throughout the interview, you get a pretty clear picture of the candidates average depth of knowledge.

Not only is this LLM proof, but you also get a sense of their opinions, their interests, their passion, etc.

It’s a major improvement but you still want to be careful on what you consider common knowledge. There’s a lot of breadth to software engineering and there can be whole areas someone is missing but can learn.

For example I was always a great employee but early in my career I wasn’t big on unit testing.

Or I interviewed for an ML job and they dinged me for not knowing a bunch of statistics things off the top of my head.

> I was always a great employee but early in my career I wasn’t big on unit testing

Unit testing does have drawbacks though, so as long as you could explain why you aren't a huge fan of it, I don't see why this would be a disqualification

This is a great approach and I like it as well. I did have a few situations where the person could talk shop like an expert but when it came to actually writing some code they failed. I literally had a person fail fizzbuzz that was supposedly senior and talked shop really well.

As a senior developer I had to search for "wtf is fizzbuzz". How many senior developers spend their time solving these kinds of problems?

I’d really want to dig into that. Maybe they were nervous? I’ve gotten so nervous in some situations I’ve forgotten the next sentence was going to say. I wonder if it’s an issue like that. Your mind just blanks.

This doesn't scale, and opens up legal liability to unfair hiring accusations.

So, am I missing something? Is this just a tool allowing job candidates to commit fraud?

I'm no lawyer, so I'm not sure this would rise to the level of actual legal fraud, but moral fraud at the very least is what I think I'm seeing.

EDIT:

Now I see it at the bottom of the page... "Interview Coder is a desktop app designed to help job seekers ace technical interviews by providing real-time assistance with coding questions."

So yes, it's exactly what it looks like.

Is it fraud when they tell you you’ll spend most of your time coding cool new products but then you mostly do customer support and meetings?

Intentional misrepresentation by any party is exactly that: a fraud. Morally at the least and possibly legally if the circumstances meet the legal standard.

The important thing is that such judgement doesn't depend on the integrity of the other party: the act stands on its own, independent of the intentions of the other party.

[deleted]

Or maybe it's defensive against an industry who has developed ridiculous hiring practices.

No: it's a fraud be that legal or moral.

If you want a job that someone is offering and they ask you for irrational, unreasonable, or just stupid demonstrations as a prerequisite for getting hired, you have one choice: decide for yourself the show isn't worth the price of admission and walk away... or you do the irrational, unreasonable, or just stupid thing asked to the best of your ability and keep your hat in the ring for the gig. Either way honesty and ethics dictate that you either play the game or walk away.

The moment you cheat and lie: that's entirely on you and perhaps your own dumbass decision to train and enter an industry that works this way. Of course, I mean "you" in the abstract, not necessarily you personally.

Everybody cheats and lies about things. Those that say they don't, well I give you a liar.

People who lie and cheat often justify themselves by asserting that everyone does. It’s easier than admitting you’re a liar and a cheat and changing your ways.

This is pure coping mechanism to explain to oneself that fraud is ok. Talk or read interviews with people sentences for fraud, they will all explain to you why it was in fact morally ok for them to do what they did.

And you can say the same for employers that exploit their workers. No one is the villain in their own story.

I agree, but I don't see the connection. Asking people to do unreasonable hiring exercises is not exploiting.

If their hiring practice is that ridiculous, that means the company has been warped by an unreasonable amount of bureaucracy. Why would anyone want to work for such a company anyway?

Needing money and not finding a preferable job usually.

> a tool allowing job candidates to commit fraud

It's not fraud.

The candidate could also use this tool to help with the job once they've been hired, and that would not be fraud either.

They wouldn't, but that's just because the interview is stupid and not like real work.

Pretty much every company I've applied to or worked at commits both legal and moral fraud against its employees and applicants, on an industrial scale. They've also created a broken interview process that punishes honesty and forces applicants to find creative solutions. So personally I see no moral fraud here.

Really... so two "wrongs" really do make a "right".

So when an employer sees many employees and candidates work the system like this, then they are also right to say, "pretty much every employee we've hired or candidate we've spoken with tries to dishonestly game the system and deals with us in bad faith, so we're justified in screwing with them any way we please. No moral fraud here: we're just doing onto those as those would do onto us."

Well, great... you've defined a level playing field that's working as optimally as it can and without any moral blame at all.

Oh, I see the problem. You've aimed your LinkedIn bot at Hacker News. You should really sort that out.

It is okay to cheat when the game is rigged. At least this is what star trek has taught me (Kobayashi Maru).

You're not wrong, but you're overlooking the cultural normalcy. When you have something to sell, you're allowed to lie until you're blue in the face and nobody even blinks. But when applicants merely demonstrate how well they understood and internalized that state of affairs, it's fraud. None of this is OK in my book but it's hypocritical to single out job applicants when the whole culture is like that.

1. That still doesn't make it okay.

2. No, you're not allowed to lie until blue in the face when selling; https://en.wikipedia.org/wiki/False_advertising is illegal. (Is it underunforced? Probably. Do I wish those laws had much stronger restrictions and harsher penalties? Yep. But is it illegal to lie to sell things? Still yes.)

(IANAL and this isn't legal advice. Though it is moral advice.)

[deleted]

I'm responding to a post on Hacker News, not writing the complete history of morality and culture.

I agree that lying and fraud by businesses, employees, buyers, and sellers are all reprehensible. Nonetheless, I would contend there's no problem discussing the actual subject at hand without expanding it.

I'm not being hypocritical, I'm simply being topical.

It’s really simple. Don't lie or cheat and don’t abide those who do.

As a software development manager, I find the most important quality I need in my direct reports is honesty. If you are not honest with me it makes it very difficult to do my job.

That some developers have been conditioned to dishonesty is a shame on our industry.

In most societies in the world today you must have a job to survive and to support the survival of one’s family. Imho it is not morally wrong to do anything you need to do to achieve gainful employment so that you and your family can survive, and I would go so far as to say it is immoral to be scolding people engaged in a fight for survival that they aren’t doing it properly.

I'm seeing a lot of justification for this tool (on the tool's page and in the comments here) based on the "LeetCode is bad, companies shouldn't test for orthogonal skills".

While I agree with that sentence broadly, tools like this undermine the process even for non-orthogonal skills. For instance, we administer System Design interviews and Practical Coding interviews (usually, we give the candidate a code base and ask them to make a modification to it)—things that are not LeetCode and are pretty relevant to day-to-day work. We actually let candidates use AI, as long as they show how they're using it. Tools like still undermine our process even for those types of interviews.

I'm a realist and understand that tools like this are inevitable. But I don't think they're ethical, and I think the "Fuck Leetcode" argument justifies their existence. In general, trickery is wrong (whether it's companies doing it, or candidates).

> System Design interviews

System Design interviews can be crammed too.

I'd find it much better if people would ask you "tell me about some complicated systems you helped build and what high-level challenged you encountered". Instead you get "how would you design google docs?".

This is why we have moved all interview loops to in-person. I highly recommend everyone who is a hiring manager do 100% of their loops in person. Granted, the coding section isn't 100% of the interview, but it's very important.

Does “in-person” mean flying in the candidate, and performing the interview on-site?

Does “100% of their loops” mean everything after the initial contact?

I suspect smaller companies could find this challenging.

I’m old enough to remember when independent recruiters acted almost like “talent agents,” with large payouts.

This encouraged them to curate a “personal brand,” and they would often do a lot of the vetting, themselves.

I believe that executive recruiters still operate that way, but engineering recruiters seem to have sharply declined, since those days.

In person means everything past the initial tech / recruiter screen, which is a very low bar just to see if we think it’s worth flying them out. The alternative is you will hire people and you can’t know if they fraudulently interviewed using things like the tools above, or even if the person who interviewed is the same person who applied and will show up on day one.

For smaller companies in particular the above can be catastrophic, so I recommend they adapt to changing times.

Great, I cant wait to go back to onsite interviews where I have to spend an entire day (at least) getting to some random office and sitting in an uncomfortable chair to do my on-sites

Don't forget about implementing quicksort on a whiteboard.

Rather than banning AI in technical interviews, better to see how the candidates use it and if they can comprehend what the LLM is saying, the quality of their prompts and own thinking.

I believe people who are using these AI tools to pass interviews wont be able to use AI in their real job in a net positive manner.

I recently hired two engineers that were good at clearing the interview rounds using AI -- I knew because I encouraged them to use AI.

But when it came to large complex codebase or problems that required critical thinking everything fell apart.

I couldn't agree more. LLMs are legitimate tools and, ideally, I want to see how effective a candidate is in using their available tools to solve complex problems.

The service on offer here is different. It's providing a means to use LLMs to fake your way through a technical interview.

Showing that you can use LLMs to quickly and correctly solve problems is a good skill to have. Offering up a solution from an LLM as your own work without acknowledging how you got there is just misrepresentation... or to put it another way is just lying. Maybe fake your degrees and experience while you're at it, right?

At least in the long run, many that need these tools to get in will be found out once they start having to solve real problems on the job. Just a shame about other, more qualified people being turned away. Of course if the LLM was sufficient enough on its own, perhaps a real software developer was never required to begin with.

Interesting approach. The effectiveness of any AI, especially in nuanced scenarios like interviews, hinges on how well its underlying knowledge is structured. For an 'invisible AI interviewer' to ask relevant, probing questions, it needs more than just data—it requires a structured understanding of the domain.

I've found that applying MECE principles (Mutually Exclusive, Collectively Exhaustive) to knowledge domains dramatically improves AI performance in complex tasks. It ensures comprehensive coverage without redundancy, allowing the AI to navigate concepts more effectively. This seems particularly relevant for assessing candidate depth versus breadth.

I interview people regularly. These are easy to detect… not in a direct way but i can tell when you are being assisted by AI. 3 so far this month out of 12, for those who were going to ask how frequently.

The industry seems so divided on AI right now.

We have interviews where we aren't allowing the use of it (yet interviewees are using stealth AIs to cheat). At the same time, I am also hearing of organizations mandating the use of it, ie: "20% of the code committed needs to be generated". There's probably a set of orgs that exist that do not allow the use of AI in coding interviews, yet practically mandate the use of AI in day-to-day work!

We are at an inflection point I think, but my guess is AI is going to win out soon enough.

I think this is a cool idea:

It’s a platform where referrals can register and then they put some money on the line. Say $20.

When an employer calls my referrals through the platform if I end up getting fired in the first six month the people that referred me lose their money (and reputation). If I stay on they get paid that amount of money.

Feel free to tweak the idea but I think it would be great to hire based on referrals in a trustable way.

Something very like that was a Launch HN in, I think, 2021. "Skip the Interview" was the name, I believe. But no doubt with that example to learn from it will go at least a little better this time.

And why would I pay to refer someone? If a company suggested that I'd assume it was a scam, and even if they were well established I don't see an incentive that isn't worse.

I've simply tell candidates to use AI as part of the interview process now. It functionally changes nothing about the evaluation.

What is your interview process then?

The exact same as normal.

Even before AI you would have candidates of varying skill level so your coding questions should have always scaled depending on the skill of the candidate.

The purpose is not to check if you've memorized some algorithms - it's to verify that you're capable of mentally constructing the model of a problem in your head, thinking through it in a structured way, etc.

Giving a candidate access to AI doesn't eliminate the need to do that.

[deleted]

Great, another paid tool to cheat at coding interviews. I guess the future is coming back to on-site interviews only.

[deleted]

Fraud seems like something to be proud of anymore.

Have you seen who's in charge?

Related:

The Leader of the LeetCode Rebellion: An Interview with Roy Lee (70 points, 9 days ago, 44 comments) https://news.ycombinator.com/item?id=43497848

I got kicked out of Columbia for taking a stand against LeetCode interviews (20 points, 9 days ago, 18 comments) https://news.ycombinator.com/item?id=43497652

This only results in that technical interviews won't be done remote or as homework in the future. Even before covid i wouldn't have recommended this remote interview approach. The by far best results in interviewing were a technical talk-through over their past experiences or some short pair-developer task (mob programming or refinement) were they can use whatever tool they want if they lack experience to talk about - i wanna see how they tackle real problems by asking good questions. Hardly to fake even with advanced ai tools if the interviewer is a very experienced engineer.

So at our company, we stopped asking algorithm questions in interviews.

Instead, our process starts with a one-hour technical conversation. We talk through the candidate's experience, how they think about systems and products, and dig into technical topics relevant to our stack (Ruby on Rails). This includes things like API design, ActiveRecord, SQL, caching, and security.

If that goes well, the next step is a collaborative pull request review. We have a test project with a few PRs, and we walk through one with the candidate. We give them context on the project, then ask for their feedback. We're looking for how they communicate, whether they spot design issues (like overly long parameter lists or complex functions), and how they reason about potential bugs.

This has worked really well for us. We've built a strong, pragmatic engineering team. Unfortunately though, none of us now remember how to invert a binary tree without Googling it..

I think if I were hiring remotely right now I’d look to create exercises that could be done “open book” using AI, but that I’d validated against current models as something they don’t do very well on their own. There are still tons of areas where the training data is thinner or very outdated, and there’s plenty of signal in seeing whether someone can work through that on their own and fix the things the LLM does wrong, or if they’ve entirely outsourced their problem solving ability.

How do you verify this when AIs are not idempotent?

When doing a tech interview, watch the person’s eyes. Pay attention to the pacing of their answers.

If they seem to be reading intently, that’s a flag. If their answers are fluffy and vague and then get very specific, that’s a flag.

Tools like this might not show up on shared screens, but people who use them behave unnaturally. It’s pretty obvious if you know what to look for.

I’ve been doing dozens of technical interviews per month and it’s pretty clear when the person is Googling answers or using some ai tool.

Seems like a whole new market is opening for people looking to game the hiring process. In my short few years being involved in interviewing, I've seen 1) obvious AI use off screen/a second person feeding answers 2) Person A showing up for the interview process and Person B showing up after being hired 3) candidates covering their lips moving with a large headset mic and someone else speaking for them.

Wild

If the problems are tailored to the role and the job requirements can be completed using AI, isn’t this sort of the correct outcome?

If you have job requirements that extend beyond “trivially completable with AI” ask questions that aren’t trivially completable with AI.

The role invariably involves things the candidates don't know coming in. Otherwise we'd be filtering candidates based on familiarity with specific technologies, which is bad for everyone. That's the purpose of these algorithmic questions; they are a generic test of competency.

Are they? Most people will end up using Google or AI or whatever other tool is available to do the job. The tests aren’t for for purpose and that’s the root of the problem, IMO.

These people will then submit PRs with broken code they don't understand, as I have witnessed. You don't know anything about a candidate if you merely witness them repeating what the AI said.

You also don’t know much about them if you rely on online leetcode quizzes, even without AI. That’s the problem. That candidates are using AI is the expected outcome of the enshitification of the interview process.

I wish hiring was: pull a ticket out of your system and work through it with the interviewee.

They could get a sense of what type of work they’d be doing and the competence of the organization and you’d get to see how they perform in the real world.

It would take a very senior coder to be able to make sense of the code base and start fixing a bug in the duration of an interview.

Plus the code base will involve technologies they are not familiar with; rarely does everyone know every technology needed beforehand.

To clarify I’d intend the interviewer to do most of work that requires inside knowledge.

The interviewee could suggest the specific implementations within that.

Eg. We need to hook into our fubu system. Here’s our library for that. How would you code that?

[deleted]

If you don't want install a binary I found way cheaper option for an interview assistant: https://interview.sh

Getting Amazon and k8s certifications will mean nothing with tools like this

This is the funniest thing I have seen all week. Burn the process to the ground. Hahaha.

Can we just stop doing performative technical interviews already? The only other industry that does interviews the way we do them for senior people is the performing arts and performing is right in the name.

The engineers I’ve worked with who’ve caused the most damage by far, have have been technical tornadoes who did fine in interviews.

I’ve never seen any damage caused by someone who slipped through the cracks without being able to code at all (and I’ve worked with people like that).

I think we’d be much better off if we just fired people who outright lie about their ability to code and spent more time digging into previous employment history, talking though projects, and talking to old coworkers.

The fact that AMZN still relies on outdated methodology, such as leetcode problems, to assess _senior_ candidates shows the company is out of touch.

Good. This destroys a system that is already broken and renders Leetcode and others useless for evaluating candidates in the AI era.

The most ironic thing is that this qualifies for "hacking a system to your advantage" for Y Combinator.

Those that are upset by Interview Coder as 'cheating' are themselves bounded by an outdated system waiting to be disrupted and Interview Coder is the result of that.

The only way to stop this and is to return to onsite interviews. Which reduces cheaters to >98%.

This spells the definitive end of Leetcode and the rest of the online assessment tools.

Some people here seem to believe this will make leetcode obsolete. I'm afraid it will just make cheatting a status quo - if everyone else aces the coding part of the interview because of some AI tool, then you are at a severe disadvantage if you try to play by the rules. From the perspective of the hiring manager it'll look like the coding questions are too easy, since everyone gets them right, except that one experienced person for some reason.

Hiring managers quickly realize that the cheaters don't actually know anything themselves, so the question is how they can combat cheating.

What will happen is that they will look for cheatproof solutions, such as testing on site, and filtering by credentials. Is that really what candidates want?

On-site and leetcode aren't mutually exclusive. We'll simply go back to in-person leetcode questions which will be equally shit.

Why would it equally be shit? It's like doing a final exam , you know what you know and and you don't know what you don't know , no hacks or gimmicks can save you.

And while yes they should also test for stack related and other technical aspects and not just Algorithmic leetcode esque knowledge , thats more of a reflection on the test.

> We'll simply go back to in-person leetcode questions which will be equally shit.

Great idea! That is even better.

It's not for leetcode. It's for people who need AI to help them solve FizzBuzz, because they are struggling. (Also, pro tip: AI won't help you here anyways.)

this is great because it's just going to bring back in office interviews and hopefully any tests are in person as well

Great for whom? Travel agents?

people who value quality code bases and quality candidates

And since the cost of hiring goes up, they'll just be more stringent about whom they invite in the first place. I hope you've worked at a few FAANGs before applying!

This is brilliant and necessary, Leetcode needs to die and a means to an end in the age of AI.

For our business we don't use Leetcode, the future looks something like paid bounties and in person interviews.

tinycorp from George Hotz does the very same thing of paid bounties to get hired there.

The highly talented people will do this for fun, while those who aren't will self select themselves out.

(0) https://tinygrad.org/#worktiny

> The highly talented people will do this for fun

No?

People who have time AND enjoy doing this sort of thing in their free time will do this for fun. That’s the self-selection, then from this pool talent hopefully gets translated into results.

Why reduce your pool to “free time and enjoy doing bounties in free time”? That’s excluding many talented people. I’ll also point out that it’s discriminatory: single childless wealthy men tend to have a whole lot more free time (for example women do most of the unpaid care work in all countries, leaving a whole lot less time for this sort of thing).

I also have a suspicion (not based on any data) that people who enjoy doing bounties in their free time certainly tend to be technically talented, but also tend to have non-technical weaknesses around communication and other soft skills. So you’d self select for this weakness too.

>Leetcode needs to die and a means to an end in the age of AI.

This is just, like, your opinion. Your future employer may think otherwise, and look for people with algorithmic skills. "But leetcode is actually evil" is just your rationalisation of your cheating.

Leetcode interviews only became popular because FAANG needed a somewhat objective way to weed out large quantities of applicants in an initial round. In this context, and as part of a broader interview process, it somewhat makes sense.

But then of course, since FAANG did it, everyone else jumped on the leetcode bandwagon and started asking ridiculous DSA-exam-type questions that had nothing to do with their actual work, even if they had the capacity to conduct proper interviews for their candidate volumes.

leetcode is a bastard. I fucking hate it with a passion.

Its almost useless as a way to learn how to be a better coder, as most of the "fastest" answers are unreadable.

But if you are using it as a basis for interviews, you are more likely to bump into someone who has trained on that particular question.

I'm not sure what the answer is, as other said, pair programming is kinda the answer. Maybe debugging something in your code base.

I guess my experience with different, because I never had to grind leetcode. I had some basic algorithmic lesson at my University (and a short adventure with competitive coding) but that's all. I never had a technical interviews where that was a problem - either there was no typical coding question, or a simple sanity check exercise. Instead we discussed some problems and thinks related to the job. I understand my experience is not typical - partially maybe I'm currently in the field of it security - but that still doesn't justify participating in the broken process with tools like this. If a company hiring process is broken just... walk away? Let them burn with leetcode grinders with no real experience that they'll finally hire.

I think because google et al started doing coding tests like this, everyone else does.

> This is just, like, your opinion

No, also mine.

The majority of this industry thinks leetcode is shit. It's some skill for sure, just not such an important skill that it becomes the de facto key test for software engineers.

Who said leetcode was "evil"?

I am an employer and would much rather have in person interviews than leetcode.

It doesn't test for anything that AI can do already if not faster otherwise.

[deleted]

I like leetcode easys for job interviews because it shows you know how to actually code. Many applicants cant even do fizz buzz.

And now they can do fizz buzz with AI, Cursor, Copilot, etc.

If you want to filter out these candidates, bounties are the way here and in person interviews.

Leetcode can't help you here.

Maybe an easy leetcode with a twist, also if you throw some LLM confusing text on the description I guarantee nobody will take it out before giving it to the LLM

But yeah last time anyone proposed me to do fizzbuzz I had to be very polite and tell them not to take a hike

I wouldn't want someone in my code base vibe coding features with AI, who doesn't know how to do fizz buzz.

> And now they can do fizz buzz with AI, Cursor, Copilot, etc.

No, they can't. These are the people who don't know the difference between a variable and a function call or what a module is. (90 percent of applicants.)

AI can't help them here. Even if they can copy-paste an AI response they still don't have the vocabulary to explain what they're copy-pasting even in most basic terms.

> No, they can't. These are the people who don't know the difference between a variable and a function call or what a module is. (90 percent of applicants.)

Yes they can and you don't know if they don't know that.

You can't stop them from doing this.

The only way to stop them is to do in person, bounties and asking about their real world experience.

The problem with people that think they're too smart is that they usually lack theory of mind

This crap is transparent. A person actually solving the problem won't type that fast or not think silently about it. And even better, the LLMs usually get stuck in a snag on some of the problems, or they write weird snippets

(though this might bring down the number of interviewers asking for leetcode)