The people likely to submit low-effort contributions are also the people most likely to ignore policies restricting AI usage.

The people following the policies are the most likely to use AI responsibly and not submit low-effort contributions.

I’m more interested in how we might let people build trust, so that reviewers can spend time productively on their contributions whilst not wasting reviewers’ time on drive-by contributors. This seems like a hard problem.

I wonder if the right call wouldn’t be to impose a LOC limit on contributions (sensibly chosen for the combination of language/framework/toolset).

I quite like this direction. Limit new contributors to small contributions, and then relax restrictions as more of their contributions are accepted.
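A minimal sketch of that relaxation, assuming a simple linear ramp (all names and thresholds here are made up for illustration, not anything a project actually uses):

```python
# Hypothetical progressive LOC limit: the cap on lines changed per PR
# grows with the contributor's count of previously merged PRs.
# base/step/cap values are illustrative only.

def loc_limit(merged_prs: int, base: int = 50, step: int = 100, cap: int = 2000) -> int:
    """Allowed lines changed in a single PR, growing with accepted contributions."""
    return min(base + merged_prs * step, cap)

def within_limit(diff_lines: int, merged_prs: int) -> bool:
    """Would a PR touching diff_lines lines pass the size gate?"""
    return diff_lines <= loc_limit(merged_prs)
```

With these made-up numbers, a first-time contributor could change at most 50 lines, someone with five merged PRs 550, and the cap tops out at 2000; a CI check could enforce this against the diff stats before a human reviewer ever looks.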

The people who write the shittiest AI code seem to be the proudest of their use of AI.