I disagree. The problem with AI slop is not so much that it's from AI, but that it's almost always unreadable, unmaintainable code. So just tell the contributor that their work is not up to standard, and if they persist, ban them from contributing further. It's their job to refactor the contribution so that it's as easy as possible to review, and if AI is not up to that task, it will obviously require human effort.

You're giving way too much credit to the people spamming these slop PRs. These are not good faith contributions by people trying to help. They are people trying to get pull requests merged for selfish reasons, whether that's a free shirt or something to put on their resume. Even on the first page of closed ghostty PRs I was able to find some prime slop[0]. It is a huge waste of time for a maintainer to nicely tell people like this they need to refactor. They're not going to listen.

Edit: and just to be totally clear, this isn't an anti-AI statement. You can still make valid, even good PRs with AI. Mitchell just posted about using AI himself recently[1]. This is about AI making it easy for people to spam low-quality slop in what is essentially a DoS attack on maintainers' attention.

[0] https://github.com/ghostty-org/ghostty/pull/10588

[1] https://mitchellh.com/writing/my-ai-adoption-journey

If you can immediately tell "this is just AI slop", that's all the review and "attention" you need; you can close the PR and append a boilerplate message that tells the contributor what to do if they want to turn it into a productive contribution. Whether they're "good faith contributors trying to help" or not is immaterial if this is their first interaction. If they don't get the point and spam the repo again, then sure, treat them as bad actors.
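For what it's worth, that triage step costs almost nothing once you can recognize the pattern. Here's a rough sketch of scripting it against the GitHub REST API; the repo name, boilerplate text, and PR number are all made up for illustration, and in practice GitHub's built-in saved replies (or the gh CLI) get you the same result with less ceremony:

    # Sketch: close a PR with a canned "not up to standard" comment.
    # REPO, BOILERPLATE, and the PR number below are hypothetical.
    import os
    import requests

    REPO = "owner/repo"
    TOKEN = os.environ["GITHUB_TOKEN"]
    HEADERS = {
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    }
    BOILERPLATE = (
        "Thanks for the PR, but this doesn't meet the project's review bar as-is. "
        "Please read CONTRIBUTING.md and resubmit a smaller, reviewable change."
    )

    def close_with_boilerplate(pr_number: int) -> None:
        # PR conversation comments go through the issues endpoint.
        requests.post(
            f"https://api.github.com/repos/{REPO}/issues/{pr_number}/comments",
            headers=HEADERS,
            json={"body": BOILERPLATE},
            timeout=10,
        ).raise_for_status()
        # Then close the pull request without merging.
        requests.patch(
            f"https://api.github.com/repos/{REPO}/pulls/{pr_number}",
            headers=HEADERS,
            json={"state": "closed"},
            timeout=10,
        ).raise_for_status()

    if __name__ == "__main__":
        close_with_boilerplate(1234)  # hypothetical PR number

The point isn't the script itself; it's that the marginal cost of the canned response stays near zero no matter how many slop PRs show up.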

The thing is, the person will use their AI to respond to your boilerplate.

That means you, like John Henry, are competing against a machine at the thing that machine was designed to do.

...and waste valuable time reviewing AI slop? It looks surprisingly plausible, but it never integrates with the bigger picture.