Human beings make relatively predictable mistakes, to the extent that I can skim-read large PRs with a few mental heuristics and work out whether the dev thought carefully about the problem when designing a solution, whether they hit common pitfalls, and so on.
AI code generation tends to pass a bunch of those heuristics while producing code you can only identify as nonsense by understanding it completely, which takes a lot more time and effort. It can generate sensible variable and function names, concise functions, and relatively decent documentation while misunderstanding the problem space in subtle ways that human beings very rarely do. The sketch below shows the kind of thing I mean.
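A minimal hypothetical sketch of that failure mode (the function names and scenario are invented for illustration, not taken from any real PR): the first function would sail past naming, style, and documentation heuristics, yet it answers a subtly different question than the one asked.

    from statistics import mean

    def average_product_rating(ratings_by_user: dict[str, list[float]]) -> float:
        """Return the average rating for a product across all reviews."""
        # Reads cleanly, but this averages each user's personal mean
        # rather than the ratings themselves, silently over-weighting
        # users who rated only once.
        return mean(mean(user_ratings) for user_ratings in ratings_by_user.values())

    def average_product_rating_fixed(ratings_by_user: dict[str, list[float]]) -> float:
        """Return the average over every individual rating, as intended."""
        all_ratings = [r for user_ratings in ratings_by_user.values() for r in user_ratings]
        return mean(all_ratings)

On balanced test data the two versions return the same number, which is exactly why this kind of mistake survives a skim: nothing on the surface looks wrong, and only working through the semantics exposes it.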
Sounds like it raises the bar on verification requirements.
... In a world where someone almost compromised SSL via a trust attack that banked on reviewers missing details... maybe that's okay?
It’s easy to point to case studies of human failures while AI is still relatively new and case studies of its commits are limited.