Yeah I think we're actually agreeing more than it seems. I'm not arguing for shipping without review - more that the review itself is where things fall through.
In practice, that "single review at the end" is often a 500-line diff that someone skims at 5pm. The sandbox did its job, the code runs, tests pass. But the reviewer misses that the auth middleware doesn't actually check token expiry, or that there's a path traversal buried in a file upload handler. Not because they're bad at reviewing, but because AI-generated code has different failure modes than human-written code, and we're not trained to spot them yet.
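To make it concrete, both of those bugs tend to look roughly like this (a hypothetical sketch - the function names, paths, and config handling are made up for illustration, not from any real codebase):

```python
import os
import jwt  # PyJWT

SECRET = os.environ["JWT_SECRET"]  # hypothetical config
UPLOAD_DIR = "/srv/uploads"        # hypothetical path

def check_token(token):
    # Signature-checks the token, so every test that mints a fresh
    # token passes. But verify_exp is disabled, so an expired or
    # leaked token is accepted forever. Easy to skim past in a diff.
    return jwt.decode(token, SECRET, algorithms=["HS256"],
                      options={"verify_exp": False})

def save_upload(filename, data):
    # Works for every filename the tests use. os.path.join doesn't
    # reject ".." components, so filename="../../etc/cron.d/x"
    # escapes UPLOAD_DIR the moment the file is opened.
    path = os.path.join(UPLOAD_DIR, filename)
    with open(path, "wb") as f:
        f.write(data)
```

Both of these pass a green test suite, and neither jumps out when you're skimming 500 lines at the end of the day.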
Scanning tools don't replace review, they're more like a checklist that runs before the human even looks at it. They catch the stuff humans consistently miss, so the reviewer can focus on logic and architecture instead of hunting for missing input validation.
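The setup can be as simple as a CI gate that runs before the review. Here's a rough sketch using Bandit as the scanner - the tool choice, the `src/` path, and the fail-on-high-severity policy are all just assumptions, swap in whatever fits your stack:

```python
# Run Bandit over the diff's source tree and fail the build on
# high-severity findings, so they never reach the reviewer's queue.
import json
import subprocess
import sys

result = subprocess.run(
    ["bandit", "-r", "src/", "-f", "json", "-q"],
    capture_output=True, text=True,
)
findings = json.loads(result.stdout)["results"]
high = [f for f in findings if f["issue_severity"] == "HIGH"]
for f in high:
    print(f'{f["filename"]}:{f["line_number"]}: {f["issue_text"]}')
sys.exit(1 if high else 0)
```

With something like that in place, the path traversal above gets flagged mechanically, and the human review is spent on the questions a scanner can't answer.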