And my entire point is that LLM-generated feature requests are strongly correlated with high-risk merge requests / pull requests, a claim the commenter never meaningfully argued against. Instead, the commenter chose to focus on the size of the PR and say "well, I've seen it in the wild."

The way to get around this without getting all the LLM influencer bros in an uproar is to come up with a system that lets open source maintainers evaluate the risk of a PR (including the author's ability to explain wtf the code does) without referencing AI, because apparently that's an easily triggered community.
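To make the idea concrete, here is a purely hypothetical sketch of what such an AI-agnostic risk rubric could look like. Every field name, weight, and threshold below is invented for illustration; this is not taken from any existing tool or project.

```python
# Hypothetical AI-agnostic PR risk rubric. All fields and weights are
# invented for illustration; nothing here references how the code was written.
from dataclasses import dataclass

@dataclass
class PullRequest:
    lines_changed: int          # total added + removed lines
    files_touched: int
    has_tests: bool             # does the PR add or update tests?
    author_prior_merges: int    # accepted PRs from this author in this repo
    explanation_answered: bool  # did the author answer "what does this do?"

def risk_score(pr: PullRequest) -> int:
    """Return a 0-100 risk score; higher means riskier."""
    score = 0
    score += min(pr.lines_changed // 100, 30)      # size: up to 30 points
    score += min(pr.files_touched * 2, 20)         # breadth: up to 20 points
    score += 0 if pr.has_tests else 20             # missing tests: 20 points
    score += max(10 - pr.author_prior_merges, 0)   # unknown author: up to 10
    score += 0 if pr.explanation_answered else 20  # can't explain it: 20 points
    return min(score, 100)
```

The point of a rubric like this is that it flags the same risk signals (huge diffs, no tests, an author who can't explain their own change) regardless of whether an LLM was involved, which sidesteps the whole "was this AI-generated?" argument entirely.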