I don't know, it's a pretty big leap for me to consider AI contributions being hard to distinguish from human ones.

AI is predictive at a token level. I think the usefulness and power of this has been nothing short of astonishing, but this token prediction is fundamentally limiting. The difference between human _driven_ vs AI generated code is usually in design. Overly verbose and leaky abstractions, too many small abstractions that don't provide clear value, broad sweeping refactors when smaller, more surgical changes would have met the immediate goals, etc. are the hallmarks of AI generated code in my experience. I don't think those will go away until there is another generational leap beyond token prediction.

That said, I used human "driven" instead of human "written" somewhat intentionally. I think AI, even in its current state, will become a revolutionary productivity-boosting developer aid (it already is to some degree). Not dissimilar to other development tools like debuggers and linters, but with much broader usefulness and impact. If a human uses AI in creating a PR, is that something to worry about? If a contribution can pass review and related process checks, does it matter how much or how little AI was used in its creation?

Personally, my answer is no. But there is a vast difference between a human using AI and an AI generated contribution being able to pass as human. I think we'll see increasing degrees of the former, but the latter is improbable to impossible without another generational leap in AI research and technology (at least IMO).

---

As a side note, overuse of AI to generate code _is_ a problem I am currently wrangling with. Contributors who over-rely on vibecoding are creating material overhead in code review and maintenance in my current role. It's making maintenance, which was already a long-tail cost generally, an acute pain.