Feels like LLMs' main use in these situations would be working through these essentially nothing-burger issues? If they're just time-consuming to solve rather than genuinely hard, they should be fairly trivial for an LLM to handle reliably enough, right? I'm often doubtful about AI for real issues: in my experience it rarely finds bigger problems from scratch without a lot of extra context, like hints about what and where the issue is, plus full context explaining all the relevant parts. However, I do find it often catches minor issues when the context is small and contained, or, as mentioned, when it already knows what the issue is and the solution is simple.

I'm sure there's already plenty of work on this, but do bigger codebases completely shut out AI right now, due to the extreme volume of unsolicited PRs they get from AIs? I'd imagine if these contributions were coordinated and structured properly, they'd be more likely to be seen as acceptable? I'm just spitballing: I've never worked on a real open source project, especially one with thousands if not millions of users and several new issues every day. My view of AI usage in these projects mostly comes from cases where maintainers ban all AI PRs outright because they're often really bad.