But their output is (usually) executable code, and isn't committed to a VCS, so the source code is still readable.
When people use LLMs to improve their code, they commit their output to Git to be used as source code.
...hmm, at some point we'll need to find a new place to draw the boundaries, won't we?
Until ~2022 there was a clear line between human-generated code and computer-generated code. The former was generally optimized for readability, and the latter for speed at all costs.
Now we have computer-generated code in the human layer and it's not obvious what it should be optimized for.
> it's not obvious what it should be optimized for
It should be optimized for readability by AI. If a human wants to know what a given bit of code does, they can just ask.