I don't see how the two are related at all. A blanket ban on LLM-generated code is at least arguably a reasonable policy.

> A blanket ban on LLM-generated code is at least arguably a reasonable policy.

No, I don't think it is. There's more nuance to this debate than either "we're banning all LLM code" or "all of our features are vibe coded".

A blanket ban on unreviewed LLM code is a perfectly reasonable way to mitigate mass-produced slop PRs, but it is not reasonable to ban all code generated by an LLM. Not only is it unenforceable, but it's also counterproductive for people who genuinely get value out of it. As long as the author reviews the code carefully before opening a PR and can be held responsible, there's no problem.

Banning all LLM code doesn't mean a project sees things in binary terms like that. There is nuance between "all code must have 100% test coverage" and "tests are a waste of time", for instance, but that doesn't mean a project adopting one of those policies thinks the middle ground doesn't exist.

A blanket ban is really the only sensible option, because it avoids wasted time on both sides: contributors know upfront that there's no point trying to get an AI-generated PR accepted, so they won't waste time creating one, and project maintainers don't waste time reviewing what might be broken AI slop, even if some AI-generated PRs would be acceptable from a quality standpoint.

When there's a grey zone, there will be lots of pointless discussions along the lines of "why was this AI-generated PR accepted but not mine?"

Perhaps you misunderstood my comment. I'm not advocating for vibe-coded AI-generated PRs, and I do think blanket-banning them is pretty reasonable for the reasons you stated.

However, I don't think that banning all AI-generated code is reasonable. Having an LLM generate a couple of functions or a bit of boilerplate in an otherwise manually coded PR shouldn't disqualify it from being accepted if the result is good.