>That said, there is a trend among many developers of banning AI. Some go so far as to say “AI not welcome here; find another project.”
>This feels extremely counterproductive and fundamentally unenforceable to me. Much of the code AI generates is indistinguishable from human code anyway. You can usually tell when a prototype is pretending to be a human PR, but a real PR that a human makes with AI assistance can be indistinguishable.
Isn't that exactly the point? Doesn't this achieve what the whole article is arguing for?
A hard "No AI" rule filters out all the slop, and all the actually good stuff (which may or may not have been made with AI) makes it in.
When the AI assisted code is indistinguishable from human code, that's mission accomplished, yeah?
Although I can see two counterarguments. First, it might just be Covert Slop: slop that flies under the radar.
And second, there might be a lot of baby thrown out with that bathwater: stuff that was made in conjunction with AI and contains a lot of "obviously AI" tells, but that a human did indeed put in the work to review.
I guess the problem is there's no way of knowing that? Is there a Proof of Work for code review? (And a proof of competence, to boot?)
Personally, I would not contribute to a project that forced me to lie.
And from the point of view of the maintainers, it seems a terrible idea to set up rules with the expectation that they will be broken.
I know, right? It's like setting up rules saying "you can't use IDE autocomplete" or "you can't code with background music because it distracts you from bugs". If the final result is indistinguishable, I find it perfectly acceptable to lie. Rules are just words, after all, especially when they're completely unenforceable.
Or, the decentralized, no-rulers solution: clone the repo on your own website and put your patches there instead.
...YYyyeah, that says a lot about you, and nothing about the project in question.
"Forced you to lie"?? Are you serious?
If the project says "no AI", and you insist on using AI, that's not "forcing you to lie"; that's you not respecting their rules and choosing to lie, rather than just go contribute to something else.
> I guess the problem is there's no way of knowing that? Is there a Proof of Work for code review?
In a live setting, you could ask the submitter to explain various parts of the code. Async, that doesn’t work, because presumably someone who used AI without disclosing it would use AI to write the explanation too.
Based on interviews I've run, people who use AI heavily have no problem using it during a live conversation to do their thinking for them there, too.
Well, instead of saying "No AI" and accepting that people will lie undetectably (apparently being fine with the lying), why not just say "Only AI when you spend the time to turn it into a real, reviewed PR, which looks like X, Y, and Z", giving some actual tips on how to use AI acceptably? Which is what OP suggests.