> We still have no legal conclusion on whether AI model generated code, that is trained on all publicly available source (irrespective of type of license), is legal or not.
That horse has bolted. No one knows where all the AI code is any more, and it would no longer be possible to comply with a ruling that no one can use AI-generated code.
There may be some mental and legal gymnastics to make it possible, but it will be made legal because it’s too late to do anything else now.
I hate that this may be true, but I also don't think the law will fix this for us.
I think this is down to the community and the culture to draw our red lines and enforce them. If we value open source, we will find a way to prevent its complete collapse through model-assisted copyright laundering. If not, OSS will be slowly enshittified as control of projects flows to the most profit-motivated entities.
But what tools do we have to stop this happening? I agree, we can (and should) all refuse to participate in licence laundering, but there will always be folks less principled.
I don’t know what happens next, honestly.
I don't either, but I guess we're both about to find out. The only surety is that there will be moves and countermoves. As far as I can tell, the best thing we can do right now is fund software-legal organizations like the EFF, which are likely to be the ones to litigate the test cases. What's hurting us most right now is that we don't know what the law means in this context, so we don't fully understand the scale of what we need to protect against or what tools we have that the courts will recognize.