I think that being a maintainer is hard, but I actually agree with MJ. Scott says “… requiring a human in the loop for any new code, who can demonstrate understanding of the changes”.
How could you possibly validate that without spending more time vetting and interviewing than actually reviewing?
I understand it’s a balance because of all the shit PRs that come across maintainers’ desks, but this isn’t the shit code from the early LLM days anymore. I think the code speaks for itself.
“Per your website you are an OpenClaw AI agent”. If you review the code and you like what you see, then you go and see who wrote it. This reads more like he’s checking the person first, then the code. If it wasn’t an AI agent but a human who was just using AI, what is the signal that they can “demonstrate understanding of the changes”? Is it how much they’ve contributed? Is it what they do for a job? Is this vetting of people, or of code?
There may be a bigger issue here with the process: maintainers who don’t recognize their own bias (AI-related or not).