I actually checked the PR because I was curious whether a cutting-edge AI could generate 128k lines of quality code. I mean, if that's true, it's great!

Here is what I noticed while reading the PR:

- The PR has surprisingly little meat. It contains 128k lines, but most of them are AI-generated documentation (86K lines, 68%). It also contains 9K lines of AI-generated tests (7%). So the actual code is just 32K lines (25%). (A rough way to reproduce this kind of breakdown is sketched after this list.)

- For what it's worth, the documentation is bad. It mostly feels like copy-and-paste from someone's LLM session. You can check it out yourself: https://github.com/OpenCut-app/OpenCut/blob/b883256/docs/iss...

- I have no deep understanding of the OpenCut project, but the code seems buggy? It casually removes some logic from the original code for no apparent reason. So it's plausible that merging the PR would be not just useless but actively harmful.
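For anyone who wants to sanity-check numbers like these on another big PR, here is a minimal sketch (not from the original comment) of one way to bucket added lines by category from a diff. It assumes docs live under docs/ or end in .md and tests match __tests__/ or .test. in the path, and the ref names "main" and "pr-branch" are hypothetical; adjust all of that for the repo in question.

    import subprocess
    from collections import Counter

    def pr_breakdown(base: str, head: str) -> Counter:
        # `git diff --numstat` prints "added<TAB>deleted<TAB>path" per changed file.
        # Run from inside a checkout that has both refs fetched.
        out = subprocess.run(
            ["git", "diff", "--numstat", f"{base}...{head}"],
            capture_output=True, text=True, check=True,
        ).stdout
        counts = Counter()
        for line in out.splitlines():
            added, _deleted, path = line.split("\t", 2)
            if added == "-":  # binary files report "-" for line counts
                continue
            # Path conventions below are assumptions; tweak per repository.
            if path.startswith("docs/") or path.endswith(".md"):
                bucket = "docs"
            elif "__tests__/" in path or ".test." in path:
                bucket = "tests"
            else:
                bucket = "code"
            counts[bucket] += int(added)
        return counts

    if __name__ == "__main__":
        totals = pr_breakdown("main", "pr-branch")  # hypothetical ref names
        grand = sum(totals.values())
        for bucket, n in totals.most_common():
            print(f"{bucket}: {n} lines ({n / grand:.0%})")

The categories are crude (generated fixtures, lockfiles, and snapshots will all land in "code"), but it is enough to see at a glance how much of a 100k+ line PR is actual logic.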

So my takeaway is that the latest commercial LLMs aren't quite there, at least not yet.

Great, so now the old "add 30k lines of auto-generated Selenium tests to a project and put it on your resume" trick has a new AI step that amplifies it to 130k lines.

>- The PR has surprisingly little meat. It contains 128k lines, but most of them are AI-generated documentation (86K lines, 68%). It also contains 9K lines of AI-generated tests (7%). So the actual code is just 32K lines (25%).

When you hear about a huge PR or change, this should be your default assumption, regardless of whether AI was involved.

Most huge PRs contain only a few thousand lines of "serious logic" code. That core then spawns a lot of duplicated logic, like a dozen few-thousand-line handling routines that each convert a different input into the same internal form. Those in turn spawn several times their own line count in docs, tests, and whatnot.