That code is still LGPL; it doesn't matter what some release engineer writes in the release notes on GitHub. All original authors and copyright holders must have explicitly agreed to relicense under a different license; otherwise the code stays LGPL licensed.
Also, the mentioned SCOTUS decision is concerned with authorship of generative AI products. That's very different from this case. Here we're talking about a tool that transformed source code and somehow magically got rid of copyright through that transformation? Imagine the consequences for the US copyright industry if that were actually possible.
In the legal system there's no such thing as "code that is LGPL". It's not an xattr attached to the code.
There is an act of copying, and there is the question of whether that copying was permitted under copyright law. If the author of the code said you can copy, then you can. If the original author didn't, but the author of a derivative work, who wasn't allowed to create that derivative work, told you you could copy it, then it's complicated.
And none of it is enforced except through lawsuits. If your work was copied without permission, you have to sue the person who did it, or else nothing happens to them.
If anything, the SCOTUS decision would seem to imply that generative AI transformations produce no additional creative contribution and therefore the original copyright holder has all rights to any derived AI works.
(IANAL)
that is a very good formulation of what I have been trying to say
but also probably not fully right
as far as I understand, they avoid deciding whether an AI can produce creative work by saying that neither the AI nor its owner/operator can claim copyright ownership (which makes it de facto public domain)
this wouldn't change anything wrt. derived work still carrying the original author's copyright
but it could change things wrt. parts of the derived work which are not themselves derived
The court avoided deciding what the operator could have copyrighted, because the operator himself said he was not the author.
That's a reasonable theory, though it runs into the problem that any model will, by its training, be derivative of codebases with incompatible licenses, and that in fact every single use of an LLM would therefore be illegal (or at least tortious).
iff it went through a full clean-room rewrite using only AI, then no, it's de facto public domain (but it probably didn't go through one)
iff it is a completely new implementation with completely different internals, then it could also be non-LGPL even if produced by a person with in-depth knowledge. Copyright only cares whether you "copied" something, not whether you had "knowledge" or whether it "behaves the same". So as long as it's distinct enough, it can still be legally fine. The "full clean room" requirement is about "what is guaranteed to hold up in front of a court", not "what might pass as non-derivative but with legal risk".