So because some projects can absorb some PRs of a certain size, all projects should be able to absorb PRs of that same size?
This anecdotal argument is a dead end. The nuance is clear: not all software is the same, and not all edits to software are the same.
>So because some projects can absorb some PRs of a certain size, all projects should be able to absorb PRs of that same size?
Your argument has less to do with AI and more to do with PR size and "fire and forget" feature merges. That's what the commenter you're responding to is pointing out.
And my entire point is that LLM-generated feature requests are strongly correlated with high-risk merge requests / pull requests, a claim the commenter never meaningfully argued against. Instead the commenter chose to focus on the size of the PR and say "well, I've seen it work in the wild."
The way to get around this without getting all the LLM influencer bros in an uproar is to come up with a system that lets open source maintainers evaluate the risk of a PR (including the author's ability to explain what the code actually does) without ever referencing AI, because apparently that's an easily triggered community.