Is it really a surprise that the project that declared a blanket ban on LLM-generated code is also emotional and childish in other areas?

A blanket ban on LLM-generated code is a completely reasonable position. If someone couldn't be bothered to write the code, why should anyone else bother to read it, let alone merge it?

Not wanting to review and maintain code that someone didn't even bother to write themselves is childish?

This argument obviously makes no sense, especially when one of the examples is a 7-character diff.

But it's fine to say "this PR makes no sense to me, please explain it better" and close it.

Denying code not on its merits but on its source is childish.

But to determine its merit a maintainer must first donate their time and read through the PR.

LLMs reduce the effort to create a plausible PR down to virtually zero. Requiring a human to write the code is a good indicator that A. the PR has at least some technical merit and B. the human cares enough about the code to bother writing a PR in the first place.

It's absolutely possible to use an LLM to generate code, carefully review, iterate and test it and produce something that works and is maintainable.

The vast majority of LLM-generated code that gets submitted in PRs on public GitHub projects is not that - see the examples they gave.

Reviewing all of that code on its merits alone in order to dismiss it would take an inordinate amount of time and effort that would be much better spent improving the project. The alternative is a blanket LLM generated code ban, which is a lot less effort to enforce because it doesn't involve needing to read piles and piles of nonsense.

> Denying code not on its merits but on its source is childish.

No, it's pretty standard legal policy, actually.

I think most people are in complete agreement.

What people don't like about LLM PRs is typically:

a. The person proposing the PR usually lacks adequate context, which makes communication and feedback, both essential, difficult if not impossible. They often cannot even explain the reasoning behind the changes they are proposing.

b. The volume/scale is often unreasonable for human reviewers to contend with.

c. The PR may not be in response to an issue but just the realization of some "idea" the author or LLM had, making it even harder to contextualize.

d. The cost asymmetry, generally speaking, is highly unfavorable to the maintainers.

At the moment, it's just that LLM-driven PRs have these qualities so frequently that people use LLM bans as a shorthand, since writing out a lengthy policy redescribing the basic tenets of participation in software development is tedious and shouldn't be necessary. But here we are in 2025, when everyone has seemingly decided to abandon those principles in favor of lazily generating endless reams of pointless code just because they can.

Brandolini's law

Usually I hate quoting "laws", but think about it. I do agree that it would be awesome if we could scrutinize 10k+ lines of code to bring in big changes, but it's not really feasible, is it?

I don't see how the two are related at all. A blanket ban on LLM-generated code is at least arguably a reasonable policy.

> A blanket ban on LLM-generated code is at least arguably a reasonable policy.

No, I don't think it is. There's more nuance to this debate than either "we're banning all LLM code" or "all of our features are vibe coded".

A blanket ban on unreviewed LLM code is a perfectly reasonable way to mitigate mass-produced slop PRs, but it is not reasonable to ban all code generated by an LLM. Not only is it unenforceable, but it's also counterproductive for people who genuinely get value out of it. As long as the author reviews the code carefully before opening a PR and can be held responsible, there's no problem.

Banning all LLM code doesn't mean they see things in binary terms like that. There is nuance between "all code must have 100% test coverage" and "tests are a waste of time", for instance, but that doesn't mean a project that adopts one of those policies thinks the middle ground doesn't exist.

A blanket ban is really the only sensible thing to do so that no time is wasted on either side: contributors know upfront that there's no point trying to get an AI-generated PR accepted, so they won't waste time creating one, and project maintainers don't waste time reviewing what might be broken AI slop, even if some AI-generated PRs would be acceptable from a quality point of view.

When there's a grey zone, there will be lots of pointless discussions like "why was this AI-generated PR accepted but not mine?" and so on.

Perhaps you misunderstood my comment. I'm not advocating for vibe-coded AI-generated PRs, and I do think that blanket banning them is pretty reasonable for the reasons you stated.

However, I don't think that banning all AI-generated code is reasonable. Having an LLM generate a couple of functions or a bit of boilerplate in an otherwise manually coded PR should not invalidate it from being accepted if it's helpful.

Given my own experience working on compiler stuff with LLMs, I'd say it's a very good decision.

LLMs jump at the first opportunity to use regex for EVERYTHING instead of doing proper lexing/parsing, for example. You need to repeatedly tell it not to use regex. In the end you might as well hand write your code, because you actually know how it works, unlike a clueless LLM.
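
As a concrete illustration of why the regex reflex falls apart (a hypothetical Python sketch, not from the project in question): regular expressions can only describe flat, non-nested token shapes, because regular languages can't count balanced delimiters, while even a tiny hand-written recursive-descent parser handles nesting naturally.

    import re

    # A regex over flat token shapes cannot capture nesting: regular
    # languages can't count balanced parentheses.
    FLAT = re.compile(r"\(([^()]*)\)")
    print(FLAT.search("(a (b) c)").group())  # finds only the inner "(b)"

    # A minimal recursive-descent parser handles nesting naturally.
    def parse_group(src, i=0):
        # Parse a parenthesized group at src[i]; return (tree, next index).
        assert src[i] == "("
        i += 1
        children = []
        while src[i] != ")":
            if src[i] == "(":
                child, i = parse_group(src, i)
                children.append(child)
            else:
                children.append(src[i])
                i += 1
        return children, i + 1  # step past the closing ")"

    tree, _ = parse_group("(a(b(c))d)")
    print(tree)  # ['a', ['b', ['c']], 'd']

The regex can only ever recover the innermost flat group; the parser recovers the whole structure, and you know exactly how it works.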

No wonder they moved to Codeberg. Those kinds of projects tend to do the ol' move to Codeberg for whatever reason. If I had to put an analogy to it, Codeberg is like Kick and Github is like Twitch.

Purity testing. I mean, one of the first lines in their announcement relates to politics.