I feel like the distinction is equivalent to

    LLMs can make mistakes. Humans can't.
Humans can and do make mistakes all the time. LLMs can automate most of the boring stuff, including unit tests with 100% coverage. They can cover edge cases you ask them to and they can even come up with edge cases you may not have thought about. This leaves you to do the review.

I think the underlying problem people have is that they don't trust themselves to review code written by others as much as they trust themselves to implement the code from scratch. Realistically, a very small subset of developers do actual "engineering" to the level of NASA / aerospace. Most of us just have inflated egos.

I see no problem with modelling the problem, defining the components, interfaces, APIs, data structures, and algorithms, and letting the LLM fill in the implementation and the testing. Well-designed interfaces are easy to test anyway, and you can tell at a glance whether it covered the important cases. It can make mistakes, but so would I. I may overlook something when reviewing, but the same thing often happens when people work together. Personally, I'd rather do architecture and review at a significantly improved speed than gloat that I handcrafted each loop and branch as if that somehow made the result safer or faster (exceptions apply, ymmv).
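
To make that split concrete, here's a toy, hypothetical sketch (not from any real project): the human writes the contract, the LLM fills in the body, and the human reviews it along with the tests.

    import re

    # Human-defined contract (hypothetical example):
    # slugify(title, max_length) -> lowercase ASCII slug, words joined by '-',
    # truncated to max_length, never ending in '-', ValueError on empty input.

    # LLM-filled implementation, left for the human to review:
    def slugify(title: str, max_length: int = 50) -> str:
        if not title.strip():
            raise ValueError("title must not be empty")
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
        return slug[:max_length].rstrip("-")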

"I feel like these distinctions are equivalent to

    LLMs can make mistakes. Humans can't."
No, that's not it. The difference between humans and AI is that AI suffers no embarrassment or shame when it makes mistakes, and the humans enthusiastically using AI don't seem to either. Most humans experience a quick and visceral deterrent when they publish sloppy code and mistakes are discovered. AI, not at all. It does not immediately learn from its mistakes the way most humans do.

In the rare case where there is a human who is consistently, persistently, confidently wrong the way AI is, a project can identify that person and stop wasting time working with them. With masses of people being told by vocal AI shills how amazing AI is, projects can easily be flooded with confidently wrong AI-generated PRs.

If unit tests are boring chores for you, or 100% coverage is somehow a goal in itself, then your understanding of quality software development is quite lacking overall. Tests are specifications: they define behavior, set boundaries, and keep the inevitable growth of complexity under control. Good tests are what keep a competent developer sane. You cannot build quality software without starting from tests. So if tests bore you, the problem is your approach to engineering. Mature developers don't get bored chasing 100% coverage; they focus on meaningful tests that actually describe how the program is supposed to work.

> Tests are specifications: they define behavior, set boundaries, and keep the inevitable growth of complexity under control.

I set boundaries during design where I choose responsibilities, interfaces and names. Red Green Refactor is very useful for beginners who would otherwise define boundaries that are difficult to test and maintain.

I design components that are small and focused so their APIs are simple and unit tests are incredibly easy to define and implement, usually parametrized. Unit tests don't keep me "sane", they keep me sleeping well at night because designing doesn't drive me mad. They don't define how the "program" is supposed to work, they define how the unit is supposed to work. The smaller the unit the simpler the test. I hope you agree: simple is better than complex. And no, I don't subscribe to "you only need integration tests".
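
Something like this toy sketch (a hypothetical unit, not from any real codebase) is what I mean by a small, focused unit with a parametrized test:

    import pytest

    # A deliberately tiny unit: clamp a value into [low, high].
    def clamp(value, low, high):
        return max(low, min(value, high))

    # One parametrized test covers the interesting cases at a glance.
    @pytest.mark.parametrize("value, low, high, expected", [
        (5, 0, 10, 5),     # inside the range
        (-3, 0, 10, 0),    # below the lower bound
        (42, 0, 10, 10),   # above the upper bound
        (0, 0, 10, 0),     # exactly on a boundary
    ])
    def test_clamp(value, low, high, expected):
        assert clamp(value, low, high) == expected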

Otherwise, nice battery of ad hominems you managed to slip in: my understanding of quality software is lacking, my problem is my approach to engineering, and I'm an immature developer. All that from "LLMs can automate most of the boring stuff, including unit tests with 100% coverage", because you can't fathom how someone can design quality software without TDD, and you can't steelman my argument (even though it's recommended in the guidelines [1]). I do review and correct the LLM output. I almost always ask it for specific test cases to be implemented. I also enjoy seeing most basic test cases and most edge cases covered. And no, I don't particularly enjoy writing factories, setups, and asserts. I'm pretty happy to review them.

[1] https://news.ycombinator.com/newsguidelines.html: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

> LLMs can automate most of the boring stuff, including unit tests with 100% coverage. They can cover edge cases you ask them to and they can even come up with edge cases you may not have thought about. This leaves you to do the review.

in my experience these tests don't test anything useful

you may have 100% test coverage, but it's almost entirely useless because it's not testing the actual desired behaviour of the system

rather just the exact implementation
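
a made-up sketch of the difference (hypothetical code, not from any real PR): the first test bumps coverage by pinning the implementation, the second states the behaviour

    # Hypothetical unit under test.
    def _normalize(rate: float) -> float:
        return max(0.0, min(rate, 1.0))

    def apply_discount(price: float, rate: float) -> float:
        return round(price * (1 - _normalize(rate)), 2)

    # Implementation-coupled test: exercises a private helper directly.
    # It pads the coverage number but says nothing about what callers get,
    # and it breaks the moment the helper is renamed or inlined.
    def test_normalize_clamps():
        assert _normalize(1.5) == 1.0

    # Behaviour-focused test: states the requirement at the public boundary.
    def test_discount_is_capped_at_the_full_price():
        assert apply_discount(100.0, 1.5) == 0.0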