I don't understand this point of view at all. There's a symmetry going entirely unappreciated by most of the comments in this thread: just as I can give Claude X,000 words of text describing the code I want it to write, I can also give it some existing code and ask for X,000 words of text explaining what it does. (Call it, oh, I don't know, a "spec," maybe.)

The explanation, in turn, can be fed back to recreate the functionality of the original code.

At that point, why care about the code at all? If it works, it works. If it doesn't, tell the model to fix it. You did ask for tests, right?

That is where we're indisputably headed. It's not quite a lossless loop yet, but those who say it won't or can't happen bear a heavy burden of proof.

Code is not spec. There is an implementation spectrum.

On one end, you have code that can perform only the behaviour explicitly declared in the spec, but has to be thrown away and rewritten for any new or updated spec.

On the other end, you have code that implements or anticipates a wide range of future possible specs including the given one.
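A toy sketch of the two ends of that spectrum, using an invented "order total" spec (the spec and function names are illustrative, not from the thread): both functions satisfy the spec as given, but they sit at opposite ends.

```python
# Hypothetical spec: "return the total price of an order, in cents"

# Narrow end of the spectrum: does exactly what the spec declares and
# nothing else. A new or updated spec (discounts, tax, currencies)
# means throwing this away and rewriting it.
def total_narrow(prices_cents):
    return sum(prices_cents)

# Broad end: anticipates a range of plausible future specs (discounts,
# tax) at the cost of complexity the current spec never asked for.
def total_broad(prices_cents, discount=0.0, tax_rate=0.0):
    subtotal = sum(prices_cents) * (1 - discount)
    return round(subtotal * (1 + tax_rate))
```

Neither choice is wrong in isolation; the point is that each function that gets written embeds a bet about which future specs will arrive, and a model has to make that bet every time.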

The AI can operate at any point on this spectrum, but it's not very good at choosing which one. The more complex the software, the more of those choices have to be made.

When the bad choices reach a critical mass, even a skilled engineer becomes powerless to undo them all, and even a powerful model becomes unable to reduce the code back to a coherent spec.

Code is not spec.

It is now, and vice versa. Deal with it.

Following along with the Amazon analogy...

Some people are mindful about what they get and don't get from Amazon, and don't die from prosperity. ("you might use AI to increase your prosperity")

The rest of the world eats too much and dies of heart disease/diabetes. ("the rest of the world will flounder more and AI will do more stuff to them than for them")