I agree this is very sane and boring. What is insane is that they have to state this in the first place.

I am not against AI coding in general. But there are too many people "contributing" AI-generated code to open source projects, even when they can't understand what's going on in their own code, just so they can say on their resumes that they contributed to a big open source project once. And when the maintainers call them out, they just blame it on the AI coding tools they are using, as if they are not opening PRs under their own names. I can't blame any open source maintainer for being at least a little sceptical when it comes to AI-generated contributions.

I think them stating this very simple policy should also be read as them explicitly not making a more restrictive policy, as some kernel maintainers were proposing.

From everything I'm seeing in the industry (I'm basically a non-coder who chooses not to use AI in the stuff I make, and privy, through personal contacts, to the private work experience of coders and creators in that field), I feel like I can shed a bit of light.

It looks to me like a more restrictive policy will be flat-out impossible.

Even people I trust are going along with this stuff, akin to CAD replacing drafting. Code is logic as language, and starting with web code and rapidly metastasizing to C++ (due to complexity and the sheer size of the extant codebase, good and bad), AI has turned slop-coding into a 'solved problem'. If you don't mean to do the best possible thing or a new thing, there is no excuse for existing as a coder in the world of AI.

If you do expect to do a new thing or a best thing, in theory you're required to produce the novel information yourself, as AI cannot reach it until you've entered it into the corpus of existing code the AI is built on. However, if you're simply recombining existing aspects of the code language in a novel way, that might be more reachable… that's probably where 'AI escape velocity' will come from, should it occur.

In practice, everybody I know is delegating the busywork of coding to AI. I don't feel social pressure to do the same, but I'm not a coder. I'm something else that produces MIT-licensed codebases for accomplishing things that aren't represented in code AS code; rather, they're for accomplishing things that are specific and experiential. I write code to make specific noises I'm not hearing elsewhere, and not hearing out of the mainstream of 'sound-making code artifacts'.

Therefore, it's impractical for Linux to take any position forbidding AI-assisted code. People will just lie and claim they wrote it themselves. Is primitive tab-complete also AI? Where's the line? What about when coding tools uniformly begin to tab-complete with extensive reasoning and code prototyping? I already see this in the JetBrains Rider editor I use for Godot hacking, even though I've turned off everything I can related to AI. It'll still try to tab-complete patterns it thinks it recognizes, rarely with what I intend.

And so what's left is to enforce responsibility. I think this is appropriate because that's where the choices will matter. Additions and alterations will be the responsibility of specific human people, which won't handle everything negative that's happening but will allow for some pressures and expectations that are useful.

I don't think you can be a collaborative software project right now and not deal with this in some way. I get out of it because I'm read-only: I'm writing stuff on a codebase that lives on an antique laptop without internet access that couldn't run AI if it tried. Very likely the only web browsers it can run are similarly unable to handle 2026 web pages, though I haven't checked in years. You've only got my word for that, though, and your estimation of my veracity based on how plausible it seems (I code publicly on livestreams, and am not at all an impressive coder when I do that). Linux can't do what I do, so it's going to do what Linux does, and this seems the best option.

You can refuse to use AI personally, but why would you not help yourself when you can?

… my dad is 86, and only after I signed him up for Claude could he write Arduino code without a phone call to me after 5 minutes of trying himself. So now he's spending 4+ hours at a time in focused sessions, writing code and building circuits for things he only dreamt about creating for decades.

Unless you’re doing something for the personal love of the craft and sharpening your tools, use every advantage you can get in order to do the job.

But… as above, if you're doing it for the love of it, sure: hand-crafted code does taste better, and you know all the ingredients are organic.

Nah. I'm only interested in the bits it doesn't know. Why would someone else's regurgitated whatever be what I wanted, and why would that help in any way?

Or just let people do the job the way they want.

On the other hand, it seriously sucks to spend time learning a big codebase and modifying it with care, only to not be given the time of day when you send the patches to the maintainers. Sometimes the reward for this human labor isn't a sincere peer review of the work and a productive back-and-forth to iron out issues before merging; it's watching your work languish unnoticed for a long time, only for the maintainer to show up after the fact and write his own fix or implementation, giving you a shout-out in the commit message if you're lucky.

Can't really blame people for reducing their level of effort. It's very easy to put in a lot of effort and end up with absolutely nothing to show for it. Before AI came along, my realization was that begging the maintainers to implement the features I wanted was the right move. They have all the context and can do it better than us in a fraction of the time it'd take us to do it. Actually cloning someone else's repository and working on it should only be attempted if one is willing to literally fork it and own the project should things go south. Now that we have AI, it's actually possible to easily understand and modify complex codebases, and I simply cannot find the will to blame people for using it to the fullest extent. Getting the AI to maintain the fork is really easy too.

> I agree this is very sane and boring. What is insane is that they have to state this in the first place.

I don't think it's insane. It seems reasonable that people could disagree about how much attribution and disclosure there should be for AI assistance, or whether it should be allowed at all.

Every document in that `process` directory explains stuff that could be obvious to some people but not others.

That's a dim view; people also contribute to make projects work for their own needs, with the hope of sharing fixes with others. For example, if I make a fix to vLLM so a model loads on particular hardware, I can verify functionality (the LLM no longer strays off topic) and local plausibility (global scales are being applied to the attention layers), but I can't pretend to understand the full math of the overall process and will never have enough time to do so. So I can be upfront about the AI assist, and then the maintainer can choose to double-check; or, if they don't have time, I guess I can just post a PR link on the model's huggingface page and tell others with the same hardware that they can try to cherry-pick it.
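For concreteness, here's a minimal sketch of what that "verify functionality" step could look like, assuming vLLM's standard offline inference API; the model path and prompts are hypothetical placeholders, and the actual check is just a human read of the output:

```python
# Minimal smoke test for a locally patched vLLM build: load the model on
# the target hardware and check by eye that generations stay on topic.
# Assumes vLLM's documented offline API; the model path is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(model="/path/to/local-model")  # hypothetical local checkpoint
params = SamplingParams(temperature=0.7, max_tokens=128)

prompts = [
    "In two sentences, what is a mutex?",
    "Name three common uses of a hash table.",
]

# If the scaling fix is wrong, outputs tend to drift off topic or
# degenerate into repetition; that's the 'functionality' signal here.
for out in llm.generate(prompts, params):
    print(out.prompt)
    print(out.outputs[0].text)
    print("---")
```

That kind of check says nothing about the underlying math being right, which is exactly why being upfront about the AI assist matters.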

What's missed is that neither contributors nor maintainers are usually paid for their effort, and nobody has standing to demand that they do anything they are not doing already. Don't like a messy vibe-coded PR but need the functionality? Then clean it up yourself and send an improved version for review. Or let it go unmerged. But don't assign work to others you don't employ.

On the other hand, companies like NVIDIA should be publicly taken to task for changing their minds about the instruction set with every new GPU and then not supporting those GPUs properly in popular inference engines; they certainly have enough money to hire people who will learn vLLM inside out and ensure high-quality patches.