I don't think it proves the rule though. I think it's two completely separate failure modes that happen to look similar in code review. the illiterate crowd submits AI code because they can't write it themselves, sure. but the experienced crowd submits AI code because they wrote a good prompt, the output looked reasonable, and they moved on to the next ticket. the second group is harder to catch because their PRs have the right structure, reasonable variable names, comments that make sense. you're not gonna flag it the way you'd flag someone who clearly doesn't understand what a middleware chain does. maybe I'm wrong about the proportions, but in the codebases I've worked on the scary bugs came from people who should have known better, not from people who never knew in the first place. the illiterate ones get caught in review. the competent ones get a rubber stamp because everyone trusts them

yeah I think we actually agree on the volume part, I'm not disputing that. my point is more about which group causes the bugs that make it to production. the garbage PRs from the illiterate crowd get caught. someone submits a PR where the error handling is clearly copy-pasted from a chatbot and the variable names are arg1 and arg2, that's an easy reject. but when your senior engineer submits something that looks clean because they prompted well and skimmed the output, that sails through review. I've literally seen a race condition introduced this way that sat in prod for weeks because the PR looked like something that person would write. so yeah, the volume problem is real, but I think it's a distraction from the harder problem
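to make the point concrete, here's a minimal sketch (Python, all names hypothetical, not the actual bug from that PR) of the kind of read-modify-write race that reads clean in review: every line looks reasonable on its own, and nothing jumps out unless the reviewer stops to think about interleaving.

```python
import threading
import time

counter = 0
lock = threading.Lock()

def racy_increment():
    # read-modify-write with a widened gap between read and write;
    # concurrent threads all read the same value and clobber each other
    global counter
    value = counter
    time.sleep(0.01)        # stands in for real work between read and write
    counter = value + 1

def safe_increment():
    # same logic, but the whole read-modify-write is held under the lock
    global counter
    with lock:
        value = counter
        time.sleep(0.01)
        counter = value + 1

def run(target, n=5):
    # run n threads against a fresh counter and return the final value
    global counter
    counter = 0
    threads = [threading.Thread(target=target) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(racy_increment))  # typically 1: the other updates were silently lost
print(run(safe_increment))  # always 5
```

the racy version would pass most reviews, especially from a trusted author: the structure is right, the names are fine, and the bug only shows up under concurrent load, which is exactly why it can sit in prod for weeks.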