I strongly disagree that it was bad faith or strawmanning. The ancestor comment had:
> This makes no sense, and it’s absurd anyone thinks it does. If the AI PR were any good, it wouldn’t need review. And if it does need review, why would the AI be trustworthy if it did a poor job the first time?
That is an entirely unfair expectation. Even the best human SWEs create PRs with significant issues, so claiming that a PR which is "any good" "wouldn't need review" sets an unreasonable bar, and I think @latexr was entirely justified in pushing back against it.
As for the "95% correctly", this appears to be a strawman argument on your end, as they said "even if ...", rather than claiming that this is the situation at the moment. But having said that, I would actually like to ask both of you - what does it even mean for a PR to be 95% correct - does it mean that that 95% of the LoC are bug-free, or do you have something else in mind?