Your assumptions are wrong. AI models do not have equal generation and discrimination abilities, so it is possible for an AI to recognize that something it generated is wrong.
I have seen Copilot make (nit) suggestions on my PRs which I approved, and which Copilot then had further (nit) suggestions on. It feels as though it looks at a line of code and identifies one way it could be improved, but doesn't then re-evaluate that line in context to see whether it can be improved further, which makes it far less useful.
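For what it's worth, what I'd want is roughly a fixed-point loop: apply the suggestion, then re-review the revised line in context until the reviewer has nothing left to say. A rough sketch of that idea (purely hypothetical, not how Copilot actually works; critique here stands in for whatever call produces a suggestion):

    # Hypothetical sketch: keep re-reviewing a line in its context after each
    # suggested fix, instead of critiquing the original line only once.
    def review_until_stable(line: str, context: str, critique, max_rounds: int = 3) -> str:
        """critique(line, context) returns an improved line, or the same line if no nit remains."""
        for _ in range(max_rounds):
            revised = critique(line, context)
            if revised == line:   # nothing further to suggest; we've hit a fixed point
                break
            line = revised        # re-evaluate the revised line in context next round
        return line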