[dead]

I think you're right about a chunk of these cases, but honestly I've also seen experienced devs do the same thing. like senior people who absolutely can write the code themselves but use AI to go faster, then skip the careful review because they trust their own judgment - they figure if they prompted it right, the output is probably fine. and sometimes it is fine. but the failure mode is different from what you're describing: it's not illiteracy, it's overconfidence. they know enough to think they'd catch a problem, but the AI generates something that passes their mental model without triggering any alarms. the auth bypass example I mentioned - that was from someone who'd been writing auth code for years; they just didn't expect the LLM to quietly drop a check that was in the original code they were refactoring. so yeah, the desperate-to-hide-illiteracy crowd is real and a problem, but I think the more dangerous version is competent people who stopped being paranoid because the code looks right
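to make the dropped-check failure mode concrete, here's a minimal sketch (all names hypothetical, not from the actual incident): a handler that checks both authentication and ownership, next to a "cleaner" refactor of the kind an LLM might emit, where the ownership check has silently disappeared. both versions read fine in a diff, which is the whole point.

```python
from dataclasses import dataclass

# Hypothetical types for illustration only.
@dataclass
class User:
    id: int

@dataclass
class Doc:
    owner_id: int
    deleted: bool = False

# Original handler: verifies both authentication and ownership.
def delete_document_original(user, doc):
    if user is None:
        raise PermissionError("not authenticated")
    if doc.owner_id != user.id:  # ownership check
        raise PermissionError("not the owner")
    doc.deleted = True

# Refactored version as an LLM might emit it: same shape, same style,
# but the ownership check is gone - any logged-in user can now delete.
def delete_document_refactored(user, doc):
    if user is None:
        raise PermissionError("not authenticated")
    doc.deleted = True
```

a reviewer skimming the refactored function sees an auth check and a delete, and nothing triggers an alarm unless they diff it line-by-line against the original.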

[dead]

I don't think it proves the rule though; I think it's two completely separate failure modes that happen to look similar in code review. the illiterate crowd submits AI code because they can't write it themselves - sure. but the experienced crowd submits AI code because they wrote a good prompt, the output looked reasonable, and they moved on to the next ticket. the second group is harder to catch because their PRs have the right structure, reasonable variable names, comments that make sense. you're not gonna flag it the way you'd flag someone who clearly doesn't understand what a middleware chain does. idk, maybe I'm wrong about the proportions, but in the codebases I've worked on the scary bugs came from people who should have known better, not from people who never knew in the first place. the illiterate ones get caught in review. the competent ones get a rubber stamp because everyone trusts them

[dead]

yeah, I think we actually agree on the volume part, I'm not disputing that. my point is more about which group causes the bugs that make it to production. the garbage PRs from the illiterate crowd get caught - someone submits a PR where the error handling is clearly copy-pasted from a chatbot and the variable names are arg1 and arg2, that's an easy reject. but when your senior engineer submits something that looks clean because they prompted well and skimmed the output, it sails through review. I've literally seen a race condition introduced this way that sat in prod for weeks because the PR looked like something that person would write. so yeah, the volume problem is real, but I think it's a distraction from the harder problem
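the "looks like something that person would write" race is usually a check-then-act on shared state. here's a minimal sketch (hypothetical class, not the actual prod bug): the racy version reads perfectly cleanly in a PR, but the check and the increment aren't atomic, so two threads can both pass the check and oversell the last slot. the fixed version is the same logic under a lock.

```python
import threading

# Hypothetical example: hands out up to `capacity` slots to callers.
class SlotReserver:
    def __init__(self, capacity):
        self.capacity = capacity
        self.taken = 0
        self._lock = threading.Lock()

    def reserve_racy(self):
        # Reads fine in review, but check and increment are separate
        # operations: two threads can both see taken < capacity before
        # either increments, so the counter can exceed capacity.
        if self.taken < self.capacity:
            self.taken += 1
            return True
        return False

    def reserve_safe(self):
        # Same logic, but the check-then-act is atomic under the lock.
        with self._lock:
            if self.taken < self.capacity:
                self.taken += 1
                return True
            return False
```

nothing in the racy version looks wrong on its own, which is why it survives review - the bug only exists in the interleaving, and the diff doesn't show interleavings.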