> This does nothing to shield Linux from responsibility for infringing code.

It’s no worse than non-AI-assisted code.

I could easily copy-paste proprietary code, sign my name attesting that it isn’t proprietary and that it complies with the GPL, and submit it.

At the end of the day, it just comes down to a lying human.

That’s the difference. In practice a human has to commit fraud to do this.

But a human just using an LLM to generate code can do it accidentally. The difference is that regurgitation of training text is a documented failure mode of LLMs.

And there’s no way for the human using it to be aware it’s happening.

You cannot accidentally sign your name saying “this code is GPL compliant.”

If you can’t be sure, don’t sign.

Yes, but if you do that manually you are acting in bad faith; if you ask an AI to do it, you have no idea whether you are going to be liable for something or not.