When the best AI models are the same or better than the best[1] human developers, what then?
We're already at the point of talking about best vs. best.
If that happens and we have a way of reliably knowing if some code is produced to that high quality, then I think we probably can accept that AI coding is the only sensible option.
We definitely are not close to that point though and it's unclear if/when we will get there.
It seems to me that people might be arguing from conflicting hidden premises here. "AI coding" is a spectrum: it could mean something as simple as letting the LLM proofread your changes and then acting on its suggestions with your own human brain, or it could mean just telling the agent what you want and letting it rip until it is done.
If I do the latter and submit a PR to something like Zig, I'll certainly be caught doing it and rightfully chastised. If I do the former, my PR will be better without anybody besides myself having any way of knowing how it got better. These days I probably do something in between when I contribute to open source.
Blanket banning all of these seems like a bad idea to me. It actively gates people like me from contributing, precisely because I respect these people and projects that much. It feels like I would be doing something they find disgusting if my work has touched an LLM, and I obviously don't want to do that to people I respect. But it's fine; there are plenty of things to do in the world even when some doors are closed.
I do not presume to have any say in the Zig project's well-argued decision[0] -- I'm not even really their user, let alone someone important like a contributor. Their point about preferring human contact is superb, frankly. It's probably a different kind of problem in an open-source project staffed with a lot of remote workers, where human contact is scarce.
https://kristoff.it/blog/contributor-poker-and-ai/
How can AI possibly be better than “the best” when the corpus of training data now includes its own slop, in addition to all the code by new, lazy, and bad devs scattered all over the internet? The law of averages applies here.
Because LLMs are obviously much more than the sum of their parts.
Oh, which parts are those? Do tell!