In most cases I've seen, it's because they get overwhelmed by sloppy contributions from developers who don't bother to review their AI's output. Code reviews are a lot of work.
Also, "responsibility" and "accountability" mean little for anonymous contributors from the internet. You can ban them, but a thousand more will keep spamming you with slop.
It is no more insane than doing the opposite. This whole business has yet to play itself out.
And yet it puts a stop to the tsunami of slop and it's pretty much impossible to prove anything of value was lost.
But why? It's a human making the PR, and you can shame/ban that human anyway.
I think AI bans are more common in projects where the maintainers are nice people that thoughtfully want to consider each PR and provide a reasoned response if rejected.
That’s only feasible when the people who open PRs are acting in good faith, and control both the quality and volume of PRs to something that the maintainers can realistically (and ought to) review in their 2-3 hours of weekly free time.
Linux is a bit different. Your code can be rejected, or not even looked at in the first place, if it isn't a high-quality, desired contribution.
Also, it’s not just about PR quality, but also volume. It’s possible for contributions to be a net benefit in isolation. But most open source maintainers only have an hour or so a week to review PRs and need to prioritize aggressively. People who code with AI agents would do well to ask themselves, “does this PR align with the priorities and time availability of the maintainer?”
For instance, I’m sure we could point AI at many open source projects and tell it to optimize performance. And the agent would produce a bunch of high quality PRs that are a good idea in isolation. But what if performance optimization isn’t a good use of time for a given maintainer’s weekly code review quota?
Sure, maintainers can simply close the PR without a reason if they don’t have time.
But I fear we are taking advantage of nice people, who want to give a reasoned response to every contribution, but simply can’t keep up with the volume that agents can produce.
Volume: things take time to review. If you’re inundated with PRs, it’s harder to curate in general.
> it's a human making the PR
Is it? Remember when that agent wrote a hit piece about the maintainer because he wouldn't merge its PR?
That's a different issue actually.
You are treating humans as reasonable actors. They very often are not. On easy-to-access platforms like GitHub, you can have humans simply working as intermediaries between an LLM and GitHub, not actually checking or understanding what they put in a pull request. Banning these people outright with clear rules is much faster and easier than trying to argue with them.
Linux is somewhat harder to contribute to, and it already has sufficient barriers in place, so its maintainers can rely on more reasonable human actors.
That takes effort that I'd rather spend doing other things.
Not insane at all. Just a very useful shortcut. Not everyone wants to move fast and break shit.
I still think it's insane, why would you care about the "origin" of the code as long as there is a human accountable (that you can ban anyway)?
Because you don't want to deal with people who can't write their own code. If they can, the rule will do nothing to stop them from contributing. It'll only matter if they simply couldn't make their contribution without LLMs.
So tomorrow, if a model genuinely finds a bunch of real vulnerabilities, you would just ignore them? That makes no sense.
An LLM finding problems in code is not at all the same as someone using it to contribute code they couldn't write, or haven't written, themselves. A report stating "there is a bug/security issue here" is not itself something I have to maintain; it's something I can react to and write code to fix. Then I have to maintain that code.
Well, until you start getting dozens of generated reports, take the time to review them, and find out that they're all plausible-looking bullshit about non-issues.
We already had that happening with other kinds of automated tooling, but at least it used to be easier to detect with a quick skim.
Because they aren’t accountable; after it is merged, only I am. And why would I want to go back and forth with an LLM through PR comments when I could just talk to the agent myself in real time? Any time I want to work through a pile of slop, I can ask for one, but I don’t work that way. I work with the agent to create plans first and refine them, and the author of a PR who couldn’t do that adds nothing.
> I work with the agent to create plans first and refine them, and the author of a PR who couldn’t do that adds nothing.
As someone who has been using AI extensively lately, this is my preferred way of doing serious projects with them:
Let them create the plan, help them refine it, then let them rip; scrutinize their diffs, push back on the parts I don't like or don't trust; rinse and repeat until commit.
Yet I assume this would still be unacceptable to most anti-AI projects, because 90%+ of the committed code was "written by the AI."
> why would I want to go back and forth with an LLM through PR comments when I could just talk to the agent myself in real time?
Presumably for the same reason you go back and forth with humans through PR comments even when you could just code it yourself in real time. That reason being, the individual on the other end of the PR should be saving you time. It's still hard work contributing quality MRs, even with AI.
I don’t have a problem working with contributors who use AI the way you described. But this thread is about working with people who could not do the work on their own. They cannot do what you described, so they cannot save me any time; they can only waste it.
If your doctor told you he used an Ouija board to find your diagnosis, would you care about the origin of the diagnosis, or just trust that he'll be accountable for it?
If the Ouija board was powered by Opus, who knows :D
It's just a form of sanctimonious virtue-signaling that's trendy right now.