A lot depends on your goals for your code reviews. And your goals might even be different for different parts of the code base.

- Are you trying to make sure that more than one human has seen the code? Then simply reading through a PR in 10 minutes and replying with either an LGTM or a polite version of WTF can be fine. This works if you have a team with good taste and a lot of cleanly isolated modules implementing clear APIs. The worst damage is that one module might occasionally be a bit marginal, but that can be an acceptable tradeoff in large projects.

- Does every single change need to be thoroughly discussed? Then you may want up-front design discussions, pairing, illustrated design docs, and extremely close reviews (not just of the diffs, but also re-reviewing the entire module with the changes in context). You may even want the PR author to present their code and walk through it with one or more people. This can be appropriate for the "core" that shapes everything else in the system.

- Is half your code written by an AI that doesn't understand the big picture, that doesn't really understand large-scale maintainability, and that cuts corners and _knowingly_ violates your written policy and best practices? Then honestly you're probably headed for tech debt hell on the express train unless your team is willing to watch the AI like hawks. Even one clueless person allowing the AI to spew subtly broken code could create a mess that no number of reviewers could easily undo. In which case, uh, maybe keep everything under 5,000 lines and burn it all down regularly, or something?

I think the other thing that often muddies the waters in discussions of code review is that open source projects and internal codebases are generally in rather different situations. An internal codebase is usually worked on by a fairly small group of experienced people, who both create and review PRs for it. So:

- the baseline "can I assume this person knows what they're doing?" level is higher

- making the "create PR" process take longer in order to make the review process faster is only a tradeoff of the time within the team

- if something is wrong with the committed code, the person who wrote it is going to be around to fix it

But open source projects much more often receive contributions from people outside the "core" long-term development team, where:

- you can't assume the contributor is familiar with the codebase, so you need to give things extra scrutiny

- there are often far fewer people reviewing code than submitting changes, so a process that requires more effort from the submitter in order to make the reviewer's job easier makes sense

- if there's a problem with the code, there's no guarantee that the submitter will be available or interested in fixing it once it's got upstream, so it's more important to catch subtle problems up front

and these tend to mean that the code-review process is tilted more towards "make it easy for reviewers, even if that requires more work from the submitter".

> - if there's a problem with the code, there's no guarantee that the submitter will be available or interested in fixing it once it's got upstream, so it's more important to catch subtle problems up front

It's also more important to have good tools for tracking down subtle problems later, which raises the value of bisection and good commit messages.

An underrated benefit of "make it easy for reviewers" is that when a bug is found, everybody becomes a potential reviewer. The benefit does not end when the PR is merged.