Hard disagree. Humans fail in ways I know, can predict, and know where to look for. ML coding assistants fail in all sorts of idiotic ways and thus every damn line needs to be scrutinized.
What actually scares me is that with humans you can at least follow their train of thought. But if an LLM just rewrites everything each time, that's impossible to follow, and the same review work has to be done over and over again.
> Humans fail in ways I know, can predict, and know where to look for.
You clearly haven't worked with humans.