Yep, that’s why my forks of all their libraries with bugs fixed, such as https://github.com/pmarreck/zigimg/commit/52c4b9a557d38fe1e1..., will never go back upstream, just because an LLM did it. Lame, but oh well, their loss. It's also dumb because anyone who wants fixes like this will have to find a fork like mine that has them, which is an increased maintenance burden.

The commit you listed was merged upstream.

https://github.com/zigimg/zigimg/pull/313

So does that mean they contradicted their own no LLM policy?

The PR doesn't disclose that "an LLM did it", so maybe the project allowed a violation of their policy by mistake. I guess they could revert the commit if they happen to see the submitter's HN comment.

Dunno, but a commenter already noted that some maintainers have begun saying "No LLM-generated PRs, but we'll accept your prompt," and another person answered that he'd seen that too.

It makes lots of sense to me.

Hugely unpopular opinion on HN, but I'd rather use flawed code written by a human than code generated by an LLM, even if the latter fixes bugs.

I'd gladly take a bug report, sure, but then I'd fix the issues myself. I'd never allow LLM code to be merged.

Any thoughts on why you have that preference?

Because human errors are, well, human. And producing code that contains those errors is a human endeavor. It's built on years, decades of learning. Mistakes were made, experience was gained, skills were improved. Reasoning by humans is relatable.

Generating slop using LLMs takes seconds, has no human element, no work goes into it. Mistakes made by an LLM are excused without sincerity, without real learning, without consequence. I hate everything about that.

I agree.

just like... don't tell them a LLM did it?

That's a dick move because you are opening up an open source project to claims of infringement without recourse.

Why on earth would you force stuff on a party that has said they don't want that?

Sure, but back in reality no you’re not? No more than any other contributor?

If I want to use an auto-complete then I can, and I will? Restricting that is as regressive as a project trying to specify that I write code from a specific country or… standing on my head.

Sure, if they want me to add a “I’m writing this standing on my head” message in the PR then I will… but I’m not.

No, you can't. See, that's where you are just wrong: when you don't respect the boundaries set by an open source project you want to contribute to, you are a net negative.

Restricting this is their right, and it is not for you to try to overrule that right. Besides the fact that you cannot foresee the consequences, it also makes you an asshole.

They're not asking for you to write standing on your head, they are asking for you to author your contributions yourself.

They are asking me to author my contributions in a way they approve of. The essence of the request is the same as asking someone to author them while standing on their head.

Except they don’t, won’t and can’t control that: the very request is insulting.

I’ll make a change any way I choose, upright, sideways, using AI. My choice. Not theirs.

Their choice is to accept it or reject it based purely on the change itself, because that’s all there is.

So, "might makes right", essentially?

No, just a normal reaction to someone trying to force their beliefs on you.

If you know there's a bug, why not just properly fix it and get it merged, instead of outsourcing that fix?

Even before AI, getting a fix into an open source project required a certain amount of time and effort. If you'd rather spend your time on other things, and you assume the bug will eventually get fixed by someone else, using an LLM to fix it just for yourself makes sense.