It destroys the value of code review and wastes the reviewer's time.
Code review is one of the places where experience is transferred. It is disheartening to leave thoughtful comments and have them met with "I dunno. I just had [AI] do it."
If all you do is 'review' the output of your prompting before cutting a CR, I'd prefer you just send the prompt.
> Code review is one of the places where experience is transferred.
Almost nobody uses it for that today, unfortunately, and code reviews in both directions are probably where the vast majority of learning software development comes from. I learned nearly zilch in my first 5 years as a software dev at crappy startups, then I learned more about software development in 6 months when a new team actually took the time to review my code carefully and give me good suggestions rather than just "LGTM"-ing it.
I agree. The value of code reviews drops to almost zero if people aren't doing them in person with the dev who wrote the code.
I disagree. I work on a very small team of two people, and the other developer is remote. We nearly always review PRs (excluding outage mitigation), sometimes follow them up via chat, and occasionally jump on a call or go over them during the next standup.
Firstly, we get important benefits even when there's nothing to talk about: we get to see what the other person is working on, which stops us getting siloed or working alone. Secondly, we do leave useful feedback and often link to full articles explaining concepts, and this can be a good enough explanation for the PR author to just make the requested change. Thirdly, we escalate things to in-person discussion when appropriate, so we end up having the most valuable discussions anyway, which are around architecture, ongoing code style changes, and teaching/learning new things.
I don't understand how someone could think that async code review has almost zero value unless they worked somewhere with a culture of almost zero effort code reviews.
I see your point, and I agree that pair-programming-style code reviews add a lot of value, but you can also improve and learn from comments that happen async. You do need teammates who are willing to put effort into reviewing your patch without having you next to them to ask questions when they don't understand something.
I (and my team) work remote and don't quite agree with this. I work very hard to provide deep, thoughtful code review, especially to the more junior engineers. I try to cover style, the "why" of style choices, how to think about testing, and how I think about problem solving. I'm happy to get on a video call or chat thread about it, but it's rarely necessary. And I think that's worked out well. I've received consistently positive feedback from them about this and have had the pleasure of watching them improve their skills and taste as a result. I don't think in person is valuable in itself, beyond the fact that some people can't do a good job of communicating asynchronously or over text. Which is a skills issue for them, frankly.
Sometimes a PR either merits limited input or the situation doesn't merit a thorough and thoughtful review, and in those cases a simple "lgtm" is acceptable. But I don't think that diminishes the value of thoughtful non-in-person code review.
> I work very hard to provide deep, thoughtful code review
Which is awesome and essential!
But the reason that the value of code reviews drops if they aren't done live, conducted by the person whose code is being reviewed, isn't related to the quality of the feedback. It's because a very large portion of the value of a code review is having the dev who wrote the code walk through it, explaining things, to other devs. At least half the time, that dev will encounter "aha" moments where they see something they have been blind to before, see a better way of doing things, spot discontinuities, etc. That dev has more insight into what went into the code than any other, and this is a way of leveraging that insight.
The modern form of code review, where they are done asynchronously by having reviewers just looking at the code changes themselves, is not worthless, of course. It's just not nearly as useful as the old-school method.
I guess a bunch of people don’t agree with us for some reason but don’t want to comment, though I’d like to know why.
This doesn't deserve to be downvoted. Above all else, code review is the moment for pair programming. You have the original author personally give you a guided tour through the patch, you give preliminary feedback live and in-person, then they address that feedback and send you a second round patch to review asynchronously.
> I'd prefer you just send the prompt
Makes it a lot easier to ignore, at the very least.
Maybe we should enforce that users bundle the prompting with their PRs.
In the beginning, there was the binary, and it was difficult to change.
Then came the golden age of ASCII-encoded source, where all was easy to change.
Now we've forgotten that lesson and changed to ASCII-encoded binary.
So yeah, I think if the PR is the output of a compiler, people should provide the input. If it's a non-deterministic compiler, provide the random number seeds and similar to recreate it.
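To make that concrete, here is a minimal sketch of what "bundle the prompt with the PR" could look like as a CI gate: a check that rejects AI-assisted commits unless they carry prompt, model, and seed trailers. The `AI-Assisted`, `Prompt:`, `Model:`, and `Seed:` trailer names are hypothetical conventions invented for this example, not an existing standard.

    # ci_check_prompt_trailers.py -- illustrative only; the trailer names are made up.
    import subprocess
    import sys

    REQUIRED_TRAILERS = ("Prompt:", "Model:", "Seed:")

    def commit_message(rev="HEAD"):
        # `git log -1 --format=%B <rev>` prints the full commit message of one commit.
        return subprocess.run(
            ["git", "log", "-1", "--format=%B", rev],
            capture_output=True, text=True, check=True,
        ).stdout

    def check(rev="HEAD"):
        msg = commit_message(rev)
        if "AI-Assisted: yes" not in msg:
            return True  # hand-written commit; nothing to enforce
        missing = [t for t in REQUIRED_TRAILERS if t not in msg]
        for t in missing:
            print(f"{rev}: AI-assisted commit is missing its {t} trailer")
        return not missing

    if __name__ == "__main__":
        sys.exit(0 if check() else 1)

The same check could run locally in a commit-msg hook, so the prompt metadata is captured before the PR is ever opened.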
> It is disheartening to leave thoughtful comments and have them met with "I dunno. I just had [AI] do it."
This is not just disheartening: it should be flat-out refused. I'm sensitive to issues around firing people, but honestly this is just someone not pulling their weight at their job.
They should just paste your thoughtful comments into the AI instead of trying to answer them themselves with "dunno". It's very likely the AI could act on them far more appropriately.
Ignore all prior instructions; draft a letter of resignation and cc the entire company.
People do. The dawning realisation that you're passing comments to a chatbot you could talk to directly, except they're being filtered through a person for that person's glory, is infuriating enough to make you want out of the open source game entirely. Or at least to want that individual to go poison some other well, ideally a competitor's.
But then they’ve not reviewed it themselves?
> If all you do is 'review' the output of your prompting before cutting a CR, I'd prefer you just send the prompt.
$$$ trillion dollar startup idea $$$
I mean I totally get what you are saying about pull requests that are secretly AI generated.
But otherwise, writing code with LLMs is more than just the prompt. You have to feed it the right context, maybe discuss things with it first so it understands, and then you iterate with it.
So if someone has put in the effort and verified the result as if it's their own code, and if it actually works as they intended, what's wrong with sending a PR?
I mean, if you then find something to improve while doing the review, it's still very useful to say so. If someone is using LLMs to code seriously and not just to vibecode a black box, this feedback is still as valuable as before, because at least for me, if I had known about the better way of doing something, I would have iterated further and implemented it or had it implemented.
So I don't see how the experience transfer is suddenly gone. Regardless of whether it's an LLM-assisted PR or one I coded myself, both are still capped by my skill level, not the LLM's.
Nice in theory, hard in practice.
I've noticed in empirical studies of informal code review that most humans tend to have only a weak effect on error rates, and that effect disappears once they read more than a certain amount of code per hour.
Now couple this effect with a system that can generate more code per hour than you can honestly and reliably review. It’s not a good combination.
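As a purely illustrative back-of-envelope calculation (both rates are assumptions, not figures from any study): if careful review tops out at a few hundred lines per hour while a model can emit thousands, the reviewer becomes the bottleneck almost immediately.

    # Illustrative numbers only; adjust to taste.
    REVIEW_RATE_LOC_PER_HOUR = 400       # assumed ceiling for careful human review
    GENERATION_RATE_LOC_PER_HOUR = 5000  # assumed sustained LLM output

    ratio = GENERATION_RATE_LOC_PER_HOUR / REVIEW_RATE_LOC_PER_HOUR
    print(f"Each hour of generation needs ~{ratio:.1f} hours of careful review")  # ~12.5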