So... big caveat that this is still under review, which means we're talking about a moving target, but based on what I can see, it seems considerably more nuanced than that. They basically ban LLM-authored code, with a careful carve-out for experimentation aimed at attracting only high-quality LLM PRs:

> It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. But not to create.

> We carve out a space for "experimentation" to inform future revisions to this policy.

Importantly, LLM contributions must be solicited, i.e., the people responsible for reviewing the final implementation have to explicitly opt in beforehand.