I've only recently started using LLMs to help me write code (as in, within the last two weeks), and the workflow that makes the most sense to me is to not let the LLM anywhere near PRs/MRs/CRs, or version control at all. I've found it useful to give it a fairly constrained task (something that might be a 100-200 line modification of my current code), literally watch the output of Claude Code's "thinking" as it goes so I can interrupt it if it's going down the wrong path or if it gives me a better idea, wait for it to present the code, and then read through all of it to make sure it's what I want. After making whatever small changes I might want, I commit, and then move on to the next thing.

So far, this has pretty much all been for personal side projects outside of work, so there is no code review. But approaching it from the standpoint that the goal is to produce the same code and version control history I would want if I created it by hand, using the LLM just as a way of automating the typing, I've been pretty surprised that it's already been a net gain in efficiency for a lot of what I've been working on. Ideally, the code I'm generating shouldn't be distinguishable from what I'm already writing, because I would change it if I saw that it was.

At that point, either it's high-quality enough to be merged, or it's not and should be rejected, and that's already how things work in the first place. If someone makes an MR that their coworkers find sloppy and annoying to review, there needs to be pushback, and how it was generated should be irrelevant if everyone is on the same page about where the bar for quality is and is acting in good faith.
(If you're working in an environment where there's no bandwidth to care about quality, or where people are acting in bad faith, LLM code probably won't be much of an improvement. But you're also probably going to have a bad time regardless, and unfortunately I don't think there's a silver bullet for fixing that.)