The secret to being an elite 10x dev: push thousands of lines of code, soak up the oohs and aahs at the standup when management highlights your amazingly large line count, post to LinkedIn about how great and humble you are, then move to the next role before anyone notices you contributed nothing but garbage, and some loser 0.1x dev has to spend months fixing something they could have written from scratch in a week or two.
This has been my experience with coworkers who are big vibe coders as well. Another “sorry, big PR coming in that needs a review” and I’m gonna lose it. 50 comments later and they still don’t change.
When using agents like this, you only see a speedup because you're offloading the time you'd spend thinking about and understanding the code. If you can review code faster than you can write it, you're cutting corners on your reviews. That's normally fine with humans (this is why we pay them), but not with AI. Most people just review for nitpicks anyway (rename a variable, add some whitespace, use map/reduce instead of forEach) instead of taking the time to understand the change (which means looking at lots of code and docs that aren't present in the diff).
That is, unless you type really slowly - which I've recently discovered is actually a bottleneck for some professionals (slow typing, syntax issues, constantly checking docs, etc.). I'll add that I experience this too when learning a new language, and AI is immensely helpful there.
You're absolutely right but I wonder if we'll have to ditch the traditional code review for something else, perhaps automated, if this agentic way continues.
> You're absolutely right
Claude! Get off HN and get back to work.
Oh my, that was unintentional. What have I become...
AI can actually review PRs decently when given enough context and detailed instructions. It doesn't eliminate the PR problem, but it can catch a lot of bugs, and it can flag the parts of the code that look questionable for a human to verify manually.
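To make that concrete, here's a minimal sketch of the kind of review step I mean (Python; `call_model` is a placeholder for whatever LLM client or agent you use, and the prompt wording is just an illustration, not any particular tool's API):

    import subprocess

    def build_review_prompt(base_branch="main"):
        # The diff alone is rarely enough context; include the changed files in full.
        diff = subprocess.run(["git", "diff", f"{base_branch}...HEAD"],
                              capture_output=True, text=True, check=True).stdout
        changed = subprocess.run(["git", "diff", "--name-only", f"{base_branch}...HEAD"],
                                 capture_output=True, text=True, check=True).stdout.splitlines()
        context = ""
        for path in changed:
            try:
                with open(path) as f:
                    context += f"\n--- {path} ---\n{f.read()}"
            except OSError:
                pass  # skip deleted or binary files
        return (
            "You are reviewing a pull request. Point out likely bugs, missing edge cases, "
            "and anything a human should manually verify. Do not nitpick style.\n"
            f"Full contents of changed files:{context}\n\nDiff:\n{diff}"
        )

    # call_model() stands in for your actual model call:
    # print(call_model(build_review_prompt()))

The key part is feeding it the surrounding files, not just the diff; without that it reviews about as well as the nitpick-only humans mentioned above.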
Fortunately, you can also force the agent to write up a summary of the code change, its reasoning, etc., which can help with review.
Which industry are you in, where there's a 1:1 ratio of coding hours to review hours?
SaaS company. And there absolutely isn't - which is why we pay for devs. Mostly due to trust, we're able to review human PRs faster than equivalent AI PRs. Trust is worth a lot of money.
This. I'm always amazed at how LLMs are praised for being able to churn out the large amounts of code we apparently all need.
I keep wondering why. Every project I've ever seen needed lines of code, nuts and bolts, removed rather than added. My best libraries consist of a couple of thousand lines.
LLMs are a godsend when it comes to developing things that fit into one of the tens of thousands (or however many) of templates they have memorized. For instance, a lot of modern B2B software development involves updating CRUD interfaces and APIs over data. If you already have 50 or so CRUD functions implemented in an existing layered architecture, asking an LLM to implement the 51st, given a spec, is a _huge_ time-saver. Of course, you still need to use your human brain to verify beforehand that there aren't special edge cases that need to be considered. Sometimes you can explain the edge cases to the LLM and it will do a perfect job of handling them (assuming you explain them well and they aren't too complicated). And if there aren't any real edge cases to worry about, the LLM can one-shot a perfect PR (assuming you did the work to give it the context).
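For a concrete picture of what "the 51st CRUD function" looks like, here's a rough sketch in Python (names like Invoice and archive_invoice are hypothetical, and the in-memory repo stands in for whatever persistence layer you actually have; the point is that 50 near-identical siblings already exist for the LLM to pattern-match against):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Invoice:
        id: int
        customer_id: int
        status: str  # "draft" | "sent" | "archived"

    class InvoiceRepository:
        def __init__(self):
            self._rows: dict[int, Invoice] = {}

        def get(self, invoice_id: int) -> Optional[Invoice]:
            return self._rows.get(invoice_id)

        def save(self, invoice: Invoice) -> Invoice:
            self._rows[invoice.id] = invoice
            return invoice

    class InvoiceService:
        def __init__(self, repo: InvoiceRepository):
            self.repo = repo

        # The "51st" operation, following the same layered pattern as the rest.
        def archive_invoice(self, invoice_id: int) -> Invoice:
            invoice = self.repo.get(invoice_id)
            if invoice is None:
                raise KeyError(f"invoice {invoice_id} not found")
            # The kind of edge case from the spec you have to spell out explicitly:
            # draft invoices can be deleted but never archived.
            if invoice.status == "draft":
                raise ValueError("draft invoices cannot be archived")
            invoice.status = "archived"
            return self.repo.save(invoice)

Boilerplate like this is exactly where the model shines, and exactly where the human value is in knowing which edge cases to put in the spec.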
Of course, there are many, many other kinds of development - when building novel low-level systems for complicated requirements, you're going to get much poorer results from an LLM, because the project won't fit as neatly into one of the "templates" it has memorized, and the LLM's reasoning capabilities are not yet sophisticated enough to handle arbitrary novelty.
The elephant in the room, though, is that the vast majority of programming fits into the template style that LLMs are good at. That's why so many people are afraid of it.
Yes - and I think those people need to expand their skill sets to include the things LLMs _cannot_ (yet) do, and/or boost their productivity by wielding LLMs to do that work for them efficiently.
I think it was Bill Gates who said something like "Measuring a software project's progress by lines of code is like measuring an aircraft project's progress by weight."
There are a lot of smart people on HN who are good coders and would do well to stop listening to this BS.
Great engineers who pick up vibe coding without adopting the ridiculous "it's AI so it can't be better than me" attitude are the ones who become incredibly proficient, able to move mountains in very little time.
People stuck in the "AI can only produce garbage" mindset are unknowingly saying something about themselves. AI is mainly a reflection of how you use it. It's a tool, and learning how to use that tool proficiently is part of your job.
Of course, some people have the mistaken belief that by taking the worst examples of bullshit coding and painting all vibe coders with that same brush, they'll delay the day they lose their job by a tiny bit. I've seen many of those takes by now. They're all blind, and they get upvoted by people who either haven't had the experience (or the correct setup) yet, or who are in pure denial.
The secret? The secret is that just as you used to have a large number of "bad coders", you now also have a large number of "bad vibe coders". I don't think it's news to anyone that most people tend to be bad or mediocre at their job. And there's this mistaken thinking that since the AI is the one doing the work, the user cannot be blamed - but yes, they absolutely can. The prompting, the tooling set up around that tool, knowing when to use it, the active review cycle, etc. - all of it is also part of the work, and if you don't know how to do it, tough.
I think one of the best skills you can have today is being really good at "glance reviews": actively reviewing code as the AI writes it and interrupting it when it goes sideways. This is something non-technical people and juniors (and even mid-levels) cannot do. Readers who have been in tech for 10+ years and have the capacity to do that would do better to use it than to bury their heads in the sand, pretending only bad code can come out of Claude or something.