Then I pass the review back to Claude Opus to implement it.

Just curious, is this a manual process or have you automated these steps?

I have a `codex-review` skill with a shell script that uses the Codex CLI with a prompt. It tells Claude to use Codex as a review partner and to push back if it disagrees. They'll sometimes go through three or four back-and-forth iterations before they reach consensus. It's not perfect, but it does help, because Claude will point out the things Codex found and give it credit.
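
Not the actual script, but a minimal sketch of what such a wrapper could look like, assuming the Codex CLI's non-interactive `codex exec` mode; the `git diff` scope and the prompt wording are illustrative assumptions:

```bash
#!/usr/bin/env bash
# Hypothetical codex-review wrapper: asks Codex to critique the current diff.
# `codex exec` runs the Codex CLI non-interactively; the diff scope and
# prompt wording below are illustrative assumptions, not the real skill.
set -euo pipefail

DIFF="$(git diff HEAD)"  # the changes Claude just made, relative to HEAD

codex exec "You are a code review partner. Review this diff critically:
point out bugs, risky changes, and missing tests, and push back on
anything you disagree with instead of rubber-stamping it.

${DIFF}"
```

Claude would then read Codex's reply, answer the points it disagrees with, and re-run the script until the two converge.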

Mind sharing the skill/prompt?

zen-mcp (now called pal-mcp, I think) works for this too; with it, Claude Code can just pass things to Gemini (or any other model).

Sometimes, depending on how big the task is. I just find 5.2 so slow.