I feel like open source is taking the wrong stance here. First, there’s a lot of gatekeeping. And second, this approach is like trying to stop a tsunami with an umbrella. AI is here to stay. We can’t stop it, however much we try.

I feel the successful OS projects will be the ones embracing the change, not stopping it. For example, automating code reviews with AI.

> I feel the successful OS projects will be the ones embracing the change, not stopping it.

Yes, you feel. And the author feels differently. We don't have evidence of what the impact of LLMs will be on a project over the long term. Many people are speculating it will be pure upside; this author is observing some issues with this model and speculating that there will be a detriment long-term.

The operative word here is "speculating." Until we have better evidence, we'll need to go with our hunches and best bets. It is a good thing that different people take different approaches rather than everyone going "all in on AI 100%." If the author is wrong, time will tell.

Yes. Exactly. We’re both “feeling” without much proof. But between the two speculations, one is more open and welcoming, while the other is more restrictive.

When you waste time trying to deal with "AI" generated pull-requests, in your free time, you might change your mind.

I share code because I think it might be useful to others. Until very recently I welcomed contributions, but my time is limited and my patience has become exhausted.

I'm sorry I no longer accept PRs, but at the same time I continue to make my code available. If minor tweaks would make it more useful for specific people, they still have the ability to make them: I've not hidden my code, and it is still available for people to modify/change as they see fit.

I disagree; these look like the first signs that mass-producing AI code without understanding hits a bottleneck at human systems. These open source responses have been necessary because of the volume of low-quality contributions. It’ll be interesting to watch the ideas develop, because I agree that AI is here to stay.

It's becoming clearer by the day that many people are incapable of using LLMs responsibly, so the only sensible response is a total ban on such activity if you hope to keep some quality and sanity in your project.

OSS projects usually have a culture that adopts quality-focused development practices much faster than commercial projects (because of the cost of adoption), so it looks like the same concerns will eventually hit other kinds of projects.

If you can TELL someone used AI, it's always, without fail, a bad use of AI.

I disagree with that. I can easily tell when my non-native English speaking coworkers use AI to help with their communications. Nine times out of ten, their communication has been improved through the use of AI.

If only there were a difference between natural languages, which aim at lossy fluency (it feels better), and programming languages, which aim at deterministic precision.

I can't find a single place in TFA (which doesn't represent or claim to represent open source writ large) that's encouraging people to not use AI.

> So how should you use an LLM to contribute?

> Use an LLM to develop your comprehension. Then communicate the best you can in your own words, then use an LLM to tweak that language. If you’re struggling to convey your ideas with someone, use an LLM more aggressively and mention that you used it. This makes it easier for others to see where your understanding is and where there are disconnects.

> There needs to be understanding when contributing to Django. There’s no way around it. Django has been around for 20 years and expects to be around for another 20. Any code being added to a project with that outlook on longevity must be well understood.

> There is no shortcut to understanding. If you want to contribute to Django, you will have to spend time reading, experimenting, and learning. Contributing to Django will help you grow as a developer.

> While it is nice to be listed as a contributor to Django, the growth you earn from it is incredibly more valuable.

> So please, stop using an LLM to the extent it hides you and your understanding. We want to know you, and we want to collaborate with you.

This advice is 95% not actionable and 100% not verifiable. It's full of hand-wavy good intentions. I understand completely where it's coming from, but 'trying to stop a tsunami with an umbrella' is a very good analogy - on one side, you have the above magical thinking, on the other, petaflops of compute which improve their reasoning capabilities exponentially.

It's eminently actionable -- the Django maintainers can decide their sensitivity/tolerance for false positives and operate from there. That's what every other open source project is doing.

(Again, I must emphasize that this is not telling people to not use LLMs, any more than telling people to wear a seatbelt would somehow be telling them to not drive a car.)

Literally the first line of the article:

"Spending your tokens to support Django by having an LLM work on tickets is not helpful. You and the community are better off donating that money to the Django Software Foundation instead."

That's not telling people to not use LLMs. It's telling them that using them in a specific way is not helpful.

Reading beyond the first line makes it clear that the problem is a lack of comprehension, not LLM use itself. Quoting:

> This isn’t about whether you use an LLM, it’s about whether you still understand what’s being contributed.

Then they could just say "understand what's being contributed" and not have to mention LLMs by name at all. They are very clearly blanket discouraging people from using LLMs at all when contributing to their project.

Ghostty accepts LLM contributions, but has strict rules around it: https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.m...

I accept LLM contributions to most of my projects, but have (only slightly less) strict rules around it. (My biggest rule is that you must acknowledge the DCO with an appropriate sign-off. If you don't, or if I believe you don't actually have the right to sign off the DCO, I will reject your change.) I will also never accept LLM-generated security reports on any of my projects.

I contribute to chezmoi, which has a strict no-LLM contribution (of any kind) policy. There've been a couple of recent user bans because the contributors used LLMs‡ and their contributions, in tickets no less, included code instructions that could not possibly have worked.

Those of us who have those rules do so out of knowledge and self-respect, not out of gatekeeping or ignorance. We want people to contribute. We don't want garbage.

I think that there needs to be something in the repo itself (`.llm-permissions`?) which all agents look at and follow. Something like:

    # .llm-permissions
    Pull-Requests: No
    Issues: No
    Security: Yes
    Translation Assistance: Yes
    Code Completion: Yes

On those repos where I know LLM contributions aren't permitted, I add `.no-llm`, because I've instructed Kiro to look for that file before doing anything that could change the code. It works about 95% of the time.
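To make the idea concrete, here's a minimal sketch of how an agent might honor such a file. The file name, keys, and values follow the format proposed above; nothing here is a standard, and the helper names (`parse_llm_permissions`, `is_allowed`) are made up for illustration.

```python
# Hypothetical sketch: read the proposed `.llm-permissions` file and
# decide whether a given LLM activity is allowed in this repo.
from pathlib import Path


def parse_llm_permissions(repo_root):
    """Read `.llm-permissions` and return a {activity: bool} mapping.

    A missing file means no policy was declared (empty dict).
    """
    path = Path(repo_root) / ".llm-permissions"
    if not path.exists():
        return {}
    perms = {}
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments like "# .llm-permissions"
        key, _, value = line.partition(":")
        perms[key.strip()] = value.strip().lower() == "yes"
    return perms


def is_allowed(perms, activity, default=False):
    """Conservative check: anything not explicitly allowed is denied."""
    return perms.get(activity, default)
```

Defaulting to "denied" for unlisted activities is the safer choice here, since a maintainer who bothered to write the file presumably wants to opt in rather than opt out.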

The one thing that I will never add or accept on my repos is AI code review. This is my code. I have to stand behind it and understand it.

‡ I disagree with those bans for practical reasons because the zero-tolerance stance wasn't visible everywhere to new contributors. I would personally have given these contributors one warning (closed and locked the issue and invited them to open a new issue without the LLM slop; second failure results in permanent ban). But I also understand where the developer of chezmoi is coming from.

> I feel the successful OS projects will be the ones embracing the change

You'll have to embrace the `ccc` compiler first, lol