Those AI checks, if you insist on having them, should be part of your pre-commit, not part of your PR review flow. They are at best (if they even reach this level) as good as a local run of a linter or static type checker. If you run them as a PR check, the PR is already out there, so people will spend time on it whether you are still fixing the AI comments or not. Best to fix those things BEFORE you share your code with the team.
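For the curious, a rough sketch of what that can look like with the pre-commit framework; ai-review.sh here is a hypothetical wrapper around whatever model or CLI you use, not a real published hook:

    # .pre-commit-config.yaml (sketch only)
    repos:
      - repo: local
        hooks:
          - id: ai-review
            name: AI review of staged changes
            # hypothetical script: feed it the staged diff and fail on findings
            entry: ./scripts/ai-review.sh
            language: system
            pass_filenames: false   # the script inspects the whole staged diff itself
            stages: [pre-commit]

That way the robot nitpicks get fixed before anyone else ever sees the branch.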

[edit] Added the part about wasting your team's time

My team uses draft PRs and goes through a process, including AI review, before removing the draft status, which then triggers any remaining human review.
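If it helps, here's a rough sketch of how that gating can look in GitHub Actions; the AI step is a placeholder for whichever bot or CLI you actually use, and scripts/ai-review.sh is hypothetical:

    # .github/workflows/ai-review.yml (sketch only)
    name: ai-review
    on:
      pull_request:
        types: [opened, synchronize]
    jobs:
      review:
        # only run the AI pass while the PR is still a draft
        if: github.event.pull_request.draft == true
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: AI review (placeholder)
            run: ./scripts/ai-review.sh "${{ github.event.pull_request.number }}"

Marking the PR ready (the "Ready for review" button, or gh pr ready with the GitHub CLI) is what pulls in the humans, since GitHub doesn't request code owners on a draft, so nothing about the normal review flow changes.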

A PR is also a decent UI for getting the feedback, but especially so for documenting and discussing the AI review suggestions with the team, just like human review.

AI review is also not equivalent to linters and static checks. It can suggest practices appropriate for the language and for your code base. Like a lot of my AI experiences it's pretty hit or miss and non-deterministic, but it doesn't cost much to disregard the misses and I appreciate the hits.

This just sounds like you haven’t worked in a team environment in the last 12 months.

The ergonomics of doing this in pre-commit make no sense.

Spin up a PR in GitHub and get Cursor and/or Claude to do a code review — it’s amazing.

It'll often spot bugs (not only obvious ones), it'll utilise your agent.md to spot mismatched coding style and missing documentation, and it'll check Sentry to see whether this part of the code touches a hotspot or a line that's been throwing errors … it's an amazing first pass.

Once all the issues are resolved you can mark the PR as ready for review and get a human to look at the big picture.

It’s unquestionably a huge time saver for reviewers.

And having the AI and human review take place with the same UX (comments attached to lines of code, being able to chat to the AI to explain decisions, having the AI resolve the comment when satisfied) just makes sense and is an obvious time saver for the submitter.

Stuff like coding style and missing documentation is what your basic dumb formatter and linter are supposed to handle; using an LLM for such things is hilarious overkill and a waste of electricity.

Your linter can tell if a comment exists. AI can tell if it’s up to date.

Why not have AI review your code BEFORE you share it with the team? That shows so much more respect for the rest of the team than just throwing your code into the wild, only to change it because some robot tells you that X could be Y.

It makes as much sense to use AI in pre-commit as it does to use a linter.

We have AI code reviews enabled for some PRs, and we discuss them on the PR from time to time to see whether they're worth doing.

I completely agree.