> disclosure if "a significant portion of the contribution is taken from a tool without manual modification", and labeling of such contributions with "a clear disclaimer or a machine-readable tag like '[AI-Generated]'".
Quixotic, unworkable, pointless. It’s fundamentally impossible (at least without a level of surveillance that would obviously be unacceptable) to prove the “artisanal hand-crafted human code” label.
> contributors should "fully understand" their submissions and would be accountable for the contributions, "including vouching for the technical merit, security, license compliance, and utility of their submissions".
This is in the right direction.
I think the missing link is around formalizing the reputation system; this exists for senior contributors but the on-ramp for new contributors is currently not working.
Perhaps bots should ruthlessly triage unvouched submissions until the actor has proven a good-faith ability to deliver meaningful results. (Or the principal has staked / donated real money to the foundation to prove they are serious.)
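Purely as an illustration, that gatekeeping rule could look something like the sketch below. All names and thresholds here are hypothetical, not any project's actual policy:

```python
# Hypothetical sketch of reputation-gated triage: unvouched submissions
# are auto-closed until the contributor has a track record, a vouch from
# an established maintainer, or a stake on file. Thresholds are made up.

from dataclasses import dataclass

@dataclass
class Contributor:
    merged_patches: int = 0   # prior accepted contributions
    vouched: bool = False     # endorsed by an established maintainer
    staked: bool = False      # e.g. donated/staked funds with the foundation

def triage(submitter: Contributor, min_track_record: int = 3) -> str:
    """Decide whether a submission earns human review time."""
    if submitter.vouched or submitter.staked:
        return "queue-for-human-review"
    if submitter.merged_patches >= min_track_record:
        return "queue-for-human-review"
    return "auto-close-with-onramp-instructions"

# A brand-new unvouched account gets bounced to the on-ramp;
# a contributor with a track record reaches human reviewers.
print(triage(Contributor()))                  # auto-close-with-onramp-instructions
print(triage(Contributor(merged_patches=5)))  # queue-for-human-review
```

The point of the sketch is only that the expensive resource (human review) is spent after a cheap signal of good faith, whatever that signal ends up being.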
I think the real problem here is the flood of low-effort slop, not AI tooling itself. In the hands of a responsible contributor LLMs are already providing big wins to many. (See antirez’s posts for example, if you are skeptical.)
> Quixotic, unworkable, pointless. It’s fundamentally impossible (at least without a level of surveillance that would obviously be unacceptable) to prove the “artisanal hand-crafted human code” label.
Difficulty of enforcement is a detail. Once the rule exists, it can be applied whenever detection does happen. And importantly, it means that anyone ignoring the rule is intentionally defrauding the project.
Debian has always been Debian, so purist opinions like this are no surprise. But my own take would also be something along the lines of a "one-strike-and-you're-out" policy (i.e., you submit slop and can't explain your submission in any way), which some projects already follow:
https://news.ycombinator.com/item?id=47109952
Yeah this is what I was getting at with “reputation” - I think the world where anyone can submit a patch and get human eyes on it is a thing of the past.
IIRC Mitchell Hashimoto recently proposed some system of attestations for OSS contributors. It’s non-obvious how you’d scale this.
This is like trying to stop spam by banning emails that send you spam.
They can spin up LLM-backed contributors faster than you can ban them.
If the situation gets that bad, I agree with you; otherwise, I don't see it as a problem.
Banning AI would hardly stop that; the LLM contributors would simply claim they're not AI.

That's why banning AI contributions is meaningless: you literally only punish the 'good' actors.
I agree. If the real concern is the flood of low-effort slop, unmaintainable patches, accidental code reuse, or licensing violations, then the process should target those directly. The useful work is improving review and triage so those problems get filtered out early. The genie is already out of the bottle with AI tooling, so broad “no AI” rules feel like a reaction to the tool and do not seem especially useful or enforceable.