> A .claude/settings.local.json

Well, at least it's easy to find the root cause of the problem :/

I don’t think that’s the root cause here. The submitter decided that a 128k line PR was a good thing.

AI is a tool. The problem is a failure of software engineering best practices (small, reviewable, incremental, self-contained PRs).

No, he did not. He said it was a bad thing. He presented a couple of new features for discussion, with a new Electron target. He decided to split it up into individual PRs after positive feedback.

The problem is that I can automatically ban tabs if I don’t like them. I can limit the number of characters per line with a script. I cannot prevent you from sending PRs with AI slop, nor can I easily detect it.

You can make a bot that auto-rejects everything over 5k lines, though.
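That kind of bot is only a few dozen lines against the GitHub REST API. A rough sketch, assuming a GitHub-hosted repo; the owner, repo, threshold, and token handling here are illustrative placeholders, not anything from this thread:

```python
# Sketch: auto-close pull requests whose total diff exceeds a size threshold.
# Uses the public GitHub REST API; requires a token with pull-request write access.
import os

import requests

OWNER, REPO = "example-org", "example-repo"  # placeholders
THRESHOLD = 5_000                            # max changed lines before auto-rejecting

HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
API = f"https://api.github.com/repos/{OWNER}/{REPO}"


def enforce_size_limit(pr_number: int) -> None:
    # The pulls endpoint reports additions/deletions for the whole PR.
    pr = requests.get(f"{API}/pulls/{pr_number}", headers=HEADERS).json()
    changed = pr["additions"] + pr["deletions"]
    if changed <= THRESHOLD:
        return
    # Explain the policy in a comment, then close the PR.
    requests.post(
        f"{API}/issues/{pr_number}/comments",
        headers=HEADERS,
        json={"body": f"Auto-closing: {changed} changed lines exceeds the "
                      f"{THRESHOLD}-line review limit. Please split this into smaller PRs."},
    )
    requests.patch(f"{API}/pulls/{pr_number}", headers=HEADERS, json={"state": "closed"})
```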

‘Hey Claude, break this up into 1,000-line commits!’

Ah, if you can’t easily detect it, wouldn’t that mean it passes muster?

Human beings make relatively predictable mistakes, to the extent that I can skim-read large PRs with mental heuristics and work out whether the dev thought carefully about the problem when designing a solution, whether they hit common pitfalls, etc.

AI code generation tends to pass a bunch of those heuristics while generating code that you can only identify as nonsense by completely understanding it, which takes a lot more time and effort. It can generate sensible variable and function names, concise functions, relatively decent documentation etc while misunderstanding the problem space in subtle ways that human beings very rarely do.

Sounds like it raises the bar on verification requirements.

... In a world where someone almost compromised SSL via a detail-missed trust attack... Maybe that's okay?

It’s easy to point out case studies of humans when AI is relatively new and case studies of its commits are limited.

No. They’re not hard to detect because they’re good, they’re “hard” to detect because understanding code takes time, and you’re putting that work on the maintainer.

I find it hard to believe that people who don’t intuit this have ever been on the receiving end. If I fill up your email inbox with LLM slop, would you consider that I’ve done you a favor because some of it’s helpful? Or would you care more about the time you’re wasting on the rest, and that it’d take longer to find the good bits than to make them yourself, and just block me?

If I want to ban variables that end in an ‘S’, I can, and I can write a script to detect them (a sketch of that kind of check follows below).

I want to ban AI PRs because they cause technical, social and potentially legal problems, but I can’t. If I put a policy in place, people will ignore it, and I can’t write a script to detect it.

This will cause problems for many open source products, because ignorant, accidental, and/or wilful ‘contributors’ will do it anyway, and now the responsibility falls on me to detect their contributions at scale.
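For contrast, the first kind of rule really is trivially scriptable. A rough sketch of the ‘variables ending in S’ check, assuming a Python codebase; the rule itself is just the toy example above, and the file paths come from the command line:

```python
# Sketch: flag Python variables whose names end in "s"/"S", as an example of a
# rule a maintainer can enforce mechanically (unlike AI provenance).
import ast
import sys


def names_ending_in_s(source: str) -> list[str]:
    """Return assigned variable names in `source` that end with 's' or 'S'."""
    tree = ast.parse(source)
    return [
        node.id
        for node in ast.walk(tree)
        if isinstance(node, ast.Name)
        and isinstance(node.ctx, ast.Store)
        and node.id.lower().endswith("s")
    ]


if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as fh:
            for name in names_ending_in_s(fh.read()):
                print(f"{path}: variable '{name}' ends in 's'")
```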

That's like saying that if you couldn't tell the person calling you was a scammer and you lost money, then the call passes muster.

As long as the person submitting the PR has put in the effort to ensure it is of high quality, it should not matter what tool they used, right?

Well, the overwhelming majority of vibe coders seem not to. Welcome to the "block all Chinese and Russian IPs" era, open source AI edition.

It depends. Could you easily determine in this case that it was "AI slop"? I have used LLMs for PRs before, but not with my brain turned off; the PR got merged because it was legitimate, and I would never have sent it without doing my own careful review. I may be in the minority, who knows.

You are in the minority.

That is quite sad if true. I would never have submitted a PR without knowing everything about the code I had generated. I manually looked up the RFC for test vectors, because the LLM sucked at that, and I generated the tests and made sure the code was correct and in accordance with the RFC. The tests included edge cases, too, and all the test vectors found in the RFC. I had to know a lot, and do a lot of my own research, for the LLM to be useful, to be honest. I do not think LLMs are there (yet, at least).
