I want this question to have an interesting answer, but everyone knows that if this question ever goes to the courts, ownership will go to the people in charge, the ones with the money. The idea that Anthropic may not own Claude Code just because Claude wrote it is wishful thinking.
Best part is, it's likely to have a different answer in every country. Who knows what'll happen; not every country implicitly sides with the ones with the most money.
Well, eventually it'll probably be added to the Berne Convention agreement or some such.
That's my feeling on the endgame too, but it'll probably be a decade before we get anywhere near it.
Depends on where they pay their taxes generally.
I love that genAI art will not be copyrightable and genAI code will be. The power of the Almighty Dollar at work.
The work-for-hire doctrine actually supports your intuition more than the AI authorship question does. The reason Anthropic likely owns Claude Code has little to do with whether Claude wrote it and everything to do with the employment contracts of the engineers who directed it. The DMCA takedown question is genuinely interesting though because DMCA requires the claimant to assert copyright ownership in good faith. If a court later found the codebase was predominantly AI-authored and therefore not copyrightable, the 8,000 takedowns could be challenged as bad faith DMCA claims. That is a different and more tractable legal question than the ownership one.
I have trouble believing that the DMCA claims would be found to be in bad faith when they were made at a time when the question of what degree of human input is required to acquire copyright on AI-generated code hadn't been resolved at all.
It doesn't seem like bad faith to think that copyright is stronger than the courts end up thinking, just being mistaken.
Fair correction, updated the piece to reflect this. Bad faith under the DMCA requires knowing the claim is false, not merely being wrong. A good faith belief in copyright ownership, even one that turns out to be mistaken, is a defense. The more accurate framing is that if the codebase is found to be predominantly AI-authored, the takedowns would fail on the threshold question of whether there is a valid copyright to assert, which is a different issue from intent.
Work-for-hire doctrine doesn't automagically absolve you from IP law. Microsoft and Intel already learned this in the nineties when they paid the San Francisco Canyon Company to steal Apple code.
https://en.wikipedia.org/wiki/San_Francisco_Canyon_Company
LLMs are just code stealers; they'll gladly generate Carmack's fast inverse square root for you, original comments included.
The San Francisco Canyon case is a good example of exactly the right distinction. Work-for-hire determines who owns the output, but if the process of creating that output involved copying protected material, the infringement claim runs separately. The piece makes this point in the open source contamination section: owning the output and having a clean chain of title to the output are different questions. You can own AI-generated code and still have a copyleft problem in it.
I can't see how that can work.
As a developer, the fact that my source code passed through a compiler - an automated tool - doesn't give the author of the compiler any claim on my executable code.
As an artist, the fact that I used, e.g., Rebelle to paint a digital painting, or that I used Lightroom (including generative AI to fill, or other ML/AI tools to de-noise and sharpen my image) in editing a photograph, doesn't give EscapeMotion, Adobe, or Topaz, any claims to my product.
Why, then, would there be any chance that use of a tool like Claude - a tool that's super-advanced to be sure, but at the end of the day operates by way of mathematical algorithms - would confer any claims to Anthropic?
> If a court later found the codebase was predominantly AI-authored and therefore not copyrightable
Is figuring out the appropriate prompts to use in directing Claude qualitatively different than using a (much) higher-level abstraction in coding? That is, there was never any talk, as we climbed the abstraction layers from machine code to assembly to Fortran or C to 4GLs to Rust etc., that the assembler/compiler/IDE builder would have any ownership claim on the produced executable. In what sense can Anthropic et al assert that their tool, which just transforms our directives to some lower-level representation, creates ownership of that lower-level representation?
It's not wishful thinking, and ownership isn't a foregone conclusion.
Sure, the courts could mint a communist society with a few weird decisions about property rights, but this being the US, do you really suppose that's likely?
There's really no legal question that models aren't people and therefore cannot own property (and also cannot enter into a legal contract, as would be required to reassign the intellectual property they don't and can't own).
The catch-22 is that the fact that models aren't people is only relevant if you treat them like a person. That's the US Copyright Office's opinion, which treats the model like a freelancer. If you instead treat the LLM as a machine, like a camera, with the author expressing their existing intent through that machine's controls, ownership is back on the table and works more or less as it did before LLMs.
Well if the camera in addition to choosing autoexposure also decided how to frame the shots, which lens to use, where to stand, and everything else salient to the artistry of photography -- all without direct human intervention, then I would think the situation would again be analogous. If the camera could do all that because an intern was holding it, the intern would still own the shots even if their employer gave them the assignment.
That's why the intern signs an employment contract that reassigns their rights to their employer!!
They won't want to own code that is malicious/illegal/used in crime, although it's really weird to me that no one (in law enforcement) seems to care that, for example, Grok generates CSAM, revenge porn, and probably other illegal things, so they'll probably get to have their cake and eat it too.
Those things have precise legal definitions, and it may not be entirely clear that an LLM can even generate them - especially in the USA, where the 1st Amendment covers things that many would think illegal (and that are illegal in other countries).
I'm not sure Anthropic would appreciate the liability that ownership would imply.
Too late to edit, but OpenAI certainly doesn't want ownership of, or liability for, the CSAM they've produced. They certainly don't want ownership of/liability for code which does $ONLYAWFULTHING.