That’s just it, though: it’s not just on your head. The liability could very likely also fall on the Linux Foundation.
You can’t say “you can do this thing that we know will cause problems, that you have no way to mitigate, but if it does, we’re not liable”. The infringement was a foreseeable consequence of the policy.
This policy effectively punts on the question of what tools were used to create the contribution, and states that regardless of how the code was made, only humans may be considered authors.
From the foundation's point of view, humans are just as capable of submitting infringing code as AI is. If your argument is sound, then how can Linux accept contributors at all?
EDIT: To answer my own question:
This is how the Foundation protects itself: the policy requires that every contribution have a human who will accept the liability if the Foundation comes under fire. The effectiveness of this policy (or lack thereof) doesn’t depend on how the code was created.

Anyone distributing copyrighted material can be liable; a DCO isn’t going to stop anyone.
If that worked, any corporation that wanted to use code it legally couldn’t would just use a fork from someone who assumed responsibility, and in the worst case it would have to stop using the code if someone found out.
> liability could very likely also fall on the Linux foundation.
It’s just the same as if I copy-paste proprietary code into the kernel and lie about it being GPL.
Is the Linux foundation liable there?
Maybe. DCOs haven’t been tested in court. But you can at least argue that the person who did this committed fraud, and that you had no reasonable way to know they would.
LLMs can and do regurgitate code without the user’s knowledge. That’s the problem: the user has no way to mitigate it. You’re telling contributors to “use this thing that has a random chance of producing infringing code”. You should have foreseen that this would result in infringing code making its way into the kernel.
If someone sent you some code and said “it’s all good bro, you can put it in the kernel with your name on it”, would you?
If you don’t feel comfortable about where some code has come from, don’t sign your name.
The fact that LLMs exist and can generate code doesn’t change how you should behave when you sign your name to vouch for something.
The only lawsuits so far have been over training on open source software. You're inventing a liability problem that essentially does not exist.
OpenAI and Anthropic added an indemnity clause to their enterprise contracts specifically to cover this scenario because companies wouldn’t adopt otherwise.