> The argument assumes that unassisted PR authorship is what builds trustworthy contributors, and that LLM assistance prevents that growth.

No, I don't think that was the argument. As I understood it, unassisted contributions have a higher chance of growing a trusted contributor. Not 100% vs. 0%, but statistically higher. So, given limited resources, it makes sense to prefer unassisted contributions over assisted ones.

I don't believe even the weakened version of the argument works -- it rests on an assumption, not a fact.

Why would a contributor who uses AI assistance have a lower chance of being trusted?

I'm not talking about AI slop, but about a contributor who takes the time to understand the problem, find a solution, and discuss the pros and cons of alternatives. Using LLM assistance, of course.

Because you are at the whim of a bot that they are at least partially dependent on.

You could extend that argument to any tool a developer uses: a linter, a sanitizer, the IDE itself, even auto-completion. Why single out LLMs specifically?

The more I think about it, the more nonsensical it is.

- What if I do everything by hand, but have an LLM review my work at the very end?
- What if I have an LLM guide me through the codebase just by telling me which files to read and in what order, while I do all the reading myself?
- What if I do everything by hand, but then use an LLM to optimize a small part of an algorithm?

You can easily see how absurd it is to completely ban LLMs.

What matters is the quality and correctness of the contribution. Even with heavy LLM usage, unless the developer understands the problem they're solving, the quality will be subpar.

Would you let your nanny subcontract?