I believe that is a narrow view of the situation. If we look at the history, at the reasons for inventing the GPL, we'll see that it was an attempt to fight copyright with copyright. The very name 'copyleft' conveys the idea.

What AI is eroding is copyright. You can re-implement not just a GPL program but also reverse engineer and re-implement a closed-source program; people have demonstrated this already, and there have been stories about it here on HN.

AI is eroding copyright, so there may no longer be a need for the GPL. GNU should stop and rethink its stance, chuck away the GPL as the main tool to fight evil software corporations and embrace LLM as the main weapon.

> LLM as the main weapon

LLM's - to date - seem to require massive capital expenditures to have the highest quality ones, which is a monumental shift in power towards mega corporations and away from the world of open source where you could do innovative work on your own computer running Linux or FreeBSD or some other open OS.

I don't think that's an exciting idea for the Free Software Foundation.

Perhaps with time we'll be able to run local ones that are 'good enough', but we're not there yet.

There's also an ethical/moral question that these things have been trained on millions of hours of people's volunteer work and the benefits of that are going to accrue to the mega corporations.

Edit: I guess the conclusion I come to is that LLM's are good for 'getting things done', but the context in which they are operating is one where the balance of power is heavily tilted towards capital, and open source is perhaps less interesting to participate in if the machines are just going to slurp it up and people don't have to respect the license or even acknowledge your work.

> LLM's - to date - seem to require massive capital expenditures to have the highest quality ones, which is a monumental shift in power towards mega corporations and away from the world of open source

Yeah, a bit of a conundrum. But I don't think that fighting for copyright now can bring any benefits to FOSS. GNU should bring Stallman back and see whether he can come up with any new ideas and a new strategy. Alternatively, they could try without Stallman. But the point is: they should stop and think again. Maybe they'll find a way forward, maybe they won't, but either way they could then continue their fight for freedom meaningfully, or just stop fighting and find something else to do. Both options are better than fighting for copyright.

> There's also an ethical/moral question that these things have been trained on millions of hours of people's volunteer work and the benefits of that are going to accrue to the mega corporations.

I want to clarify this statement a bit. LLMs relying on the work of others is not against GNU philosophy as I understand it: algorithms have to be free. There's nothing wrong with training LLMs on them or on programs implementing them. Nothing wrong with using these LLMs to write new (free) programs. What is wrong is corporations reaping all the benefits now and locking down new algorithms later.

I think it is important, because copyright is deemed to be an ethical thing by many (I think for most people it is just a deduction: abiding by the law is ethical, therefore copyright is ethical), but not for GNU.

>Yeah, a bit of a conundrum.

IMO it's the most significant trend in AI. It doesn't get talked about nearly enough. Means the AI is working, I guess.

>GNU should bring Stallman back ... Alternatively they could try without Stallman.

Leave Britney alone >:(

>copyright is deemed to be an ethical thing by many (I think for most people it is just a deduction: abiding the law is ethical, therefore copyright is ethical)

I've busted out "intellectual property is a crime against humanity" at layfolk to see if that shortcuts through that entire little politico-philosophical minefield. They emote the requisite mild shock when such things as crimes against humanity are mentioned; as well as at someone making such a radical statement which seems to come from no familiar species of echo chamber; and then a moment later they begin to very much look like they see where I'm coming from.

How do you even argue such a thing? I've had no such luck; I've met many people who seem to view copyright, a person owning their ideas and work, as a sort of inherent moral.

Not saying this gets through to people, but copyright is purely about the legal ability to restrict what other people do. Whereas property rights are about not allowing others to restrict what you do (e.g. by taking your stuff).

>Perhaps with time we'll be able to run local ones that are 'good enough', but we're not there yet.

Right now, we can get local models that run on consumer hardware and match the capabilities of state-of-the-art models from two years ago. The improvements to model architecture may or may not maintain the same pace in the future, but we will get a local equivalent to Opus 4.6, or whatever other benchmark of "good enough" you have, in the foreseeable future.

> LLM's - to date - seem to require massive capital expenditures to have the highest quality ones

There are near-SOTA LLM's available under permissive licenses. Even running them doesn't require prohibitive expenses on hardware unless you insist on realtime use.

> running them doesn't require prohibitive expenses on hardware

What async tasks could a local LLM accomplish on Intel 11th gen CPU with 32GB RAM?

> LLM's - to date - seem to require massive capital expenditures to have the highest quality ones, which is a monumental shift in power towards mega corporations and away from the world of open source where you could do innovative work on your own computer running Linux or FreeBSD or some other open OS.

When the FSF and GPL were created, I don't think this was really a consideration. They were perfectly happy with requiring Big Iron Unix or an esoteric Lisp Machine to use the software - they just wanted to have the ability to customize and distribute fixes and enhancements to it.

Maybe a good open source idea is to crowd-source training, "SETI@home" style, assuming that's possible.
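To make the idea concrete, here's a minimal toy sketch of what crowd-sourced training could look like, assuming a federated-averaging scheme: each volunteer computes an update on their own data, and a coordinator averages the results. All names and numbers are illustrative, not any real system.

```python
# Toy federated averaging: volunteers each take a local gradient step
# from the same global weights, then the coordinator averages the
# resulting weight vectors. Purely a sketch of the idea.
from statistics import fmean

def local_update(weights, gradient, lr=0.5):
    """One volunteer's local step: weights - lr * gradient."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(updates):
    """Coordinator averages the volunteers' updated weights."""
    return [fmean(ws) for ws in zip(*updates)]

# One toy round with three volunteers and made-up gradients.
global_weights = [0.0, 0.0]
volunteer_grads = [[1.0, 2.0], [3.0, 2.0], [2.0, 2.0]]

updates = [local_update(global_weights, g) for g in volunteer_grads]
global_weights = federated_average(updates)
print(global_weights)  # [-1.0, -1.0]
```

Real distributed training has to handle stragglers, malicious contributions, and enormous bandwidth costs, which is why it's an open question whether this scales to frontier-size models.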

> There's also an ethical/moral question that these things have been trained on millions of hours of people's volunteer work and the benefits of that are going to accrue to the mega corporations.

This was already the case and it just got worse, not better.

At a certain point, I think we had reached a kind of equilibrium where some corporations were decent open source citizens. They understood that they could open source things like infrastructure or libraries and keep their 'crown jewels' closed. And while Stallman types might not have been happy with that, it seemed to work out for people.

Now they've just hoovered up all the free stuff into machines that can mix it up enough to spit it out in a way that doesn't even require attribution, and you have to pay to use their machine.

AI essentially gatekeeps all of open source for companies to pluck from to their hearts' content. And individual contributors using these tools and freely mixing them with their own, usually minor, contributions are another step of whitewashing, because they're definitely not going to own up to having written only 5% of the stuff they got paid for.

Before we had RedHat and Ubuntu, who at least were contributing back, now we have Microsoft, Anthropic and OpenAI who are racing to lock the barn door around their new captive sheep. It's just a massive IP laundromat.

Is massive capital expenditure not also required to enforce the GPL? If some company steals your GPLed code and doesn't follow the license, you will have to sue them and somebody will have to pay the lawyers.

> Is massive capital expenditure not also required to enforce the GPL?

It's nowhere near the order of magnitude of the kind of spending they're sinking into LLM's. The FSF and other groups were reasonably successful at enforcing the GPL while operating on budgets thousands of times smaller than those of AI companies.

Right, but LLM companies are building frontier models with frontier talent while trying to soak up demand with a loss-leader strategy, on top of a historic infrastructure build-out.

Being able to cost-efficiently run frontier models is, I think, not a high-priced endeavor for an org (compared to an individual).

IMO the proposition is a little fishy, but it's not totally without merit and IMO deserves investigation. If we are all worried about our jobs, even via building custom for-sale software, there is likely something there that may obviate the need at least for end-user applications. Again, I'm deeply skeptical, but it is interesting.

> Being able to cost-efficiently run frontier models is, I think, not a high-priced endeavor for an org

Running a proprietary model would make you subject to whatever ToS the LLM companies choose on a particular day, including restrictions on what you can produce with them, which circles back to the raison d'être of the GPL and GNU.

Until all software copyright is dead and buried, there is no need for copyleft to change tack. Otherwise the rising tide may rise high enough to drown the GPL, but not proprietary software.

Open source is easier to counterfeit/license-launder/re-implement using LLMs because source code is much lower-hanging fruit, and is understood by more people than closed-source assembly.

How close are we to good enough and who's working on that? I would be interested in supporting that work; to my mind, many of the real objections to LLMs are diminished if we can make them small and cheap enough to run in the home (and, perhaps, trained with distributed shared resources, although the training problem is the harder one).

Good question. It seems like most of the tech world is perfectly happy to be sharecroppers on the Big AI farms. I guess that's not quite the right analogy, since they're doing their own things with it; just that at the end of the day, the tool they're building everything on is owned by someone else.

Copyleft is a mirror of copyright, not a way to fight copyright. It grants rights to the consumer where copyright grants rights to the creator. Importantly, it gives the end-user the right to modify the software running on their devices.

Unfortunately, there are cases where you simply can't just "re-implement" something. E.g., because doing so requires access to restricted tools, keys, or proprietary specifications.

These are Stallman's words:

"So, I looked for a way to stop that from happening. The method I came up with is called “copyleft.” It's called copyleft because it's sort of like taking copyright and flipping it over. [Laughter] Legally, copyleft works based on copyright. We use the existing copyright law, but we use it to achieve a very different goal."

https://writings.hongminhee.org/2026/03/legal-vs-legitimate/

> flipping it over.

i.e. mirroring it

> use it to achieve a very different goal."

"very different goal" isn't the same as "fundamentally destroying copyright"

the very different goals include protecting public code so it stays public, proper attribution, preventing companies from just "sizing" it, motivating others to make their code public too, etc.

and even if his goals were not like that, it wouldn't make a difference, as this is what many people try to achieve by using such licenses

this kind of AI usage is very much not in line with these goals,

and in general, way cheaper software cloning isn't sufficient to fix many of the issues the FOSS movement tried to fix, especially when looking at the current ecosystem most people interact with (i.e. phones)

---

("sizing"): As in the typical MS embrace, extend, and extinguish strategy: first embracing the code, then giving it proprietary but available extensions/changes/bug fixes/security patches, then making those no longer available if you don't pay them or play by their rules.

---

Though in the end, using AI as a "fancy complicated" photocopier for code removes copyright about as much as using an actual photocopier for code would. It doesn't matter if you use the photocopier blindfolded and never look at the thing you copied.

That’s not a rebuttal of the OP’s point. None of that says anything about fighting copyright. It literally says he flipped it, which is what the OP said when they said it’s a mirror.

> We use the existing copyright law, but we use it to achieve a very different goal.

For the right goal, he should have called it "rightcopy".

> It grants rights to the consumer where copyright grants rights to the creator.

It also grants one major right/feature to the creator, the ability to spread their work while keeping it as open as they intend.

> AI is eroding copyright, so there may no longer be a need for the GPL. GNU should stop and rethink its stance, chuck away the GPL as the main tool to fight evil software corporations and embrace LLM as the main weapon.

Is this LLM thing freely available or is it owned and controlled by these companies? Are we going to rent the tools to fight "evil software corporations"?

There already are LLMs with open weights that are better at code than state of the art closed source models from a year ago. For now, most people may have to rent the hardware to run those models, since owning something that can run inference on one trillion parameters is too expensive, but I wouldn't consider LLMs to be controlled by "evil software corporations" at this point.

> There already are LLMs with open weights that are better at code than state of the art closed source models from a year ago.

A year ago, the "state of the art" models were total turds. So this isn't exactly good news

Not to mention the performance of local LLMs makes them utterly unusable unless you have multiple tens of thousands to invest in hardware (and that was before the recent price spike). If you're using commodity hardware, they're just awful to use.

Open models do exist. They’re nowhere near as good as frontier models, but they’re getting better all the time.

It’s probably only a matter of time before open models are as good as Claude code is today.

With the release of GLM-5, I would say that they are pretty much almost as good. Basically 90% as good as Opus 4.6 on most tasks for 20% of inference cost, and open weights.

easy, we ask Claude to write an open-source freely-available version of Claude with equal or better capabilities.

> chuck away the GPL as the main tool to fight evil software corporations and embrace LLM as the main weapon.

LLMs are one of the primary manifestations of 'evil software corporations' currently.

Until there is a capable open source open weight AI that is easily hostable by an average person - no, we still have a long way to go. You aren't going to have software freedom when the tool that enables it is controlled by a handful of powerful tech companies.

> we'll see that it was an attempt to fight copyrights with copyrights

it's not that simple

yes, the GPL's origins have the idea of "everyone should be able to use it"

but it is also about attributing the original author

and making sure people can't just de-facto "size public goods"

this kind of AI usage is removing attribution and is often sizing public goods in a way far worse than most companies that just ignored the license did

so today there is more need than ever in the last few decades for GPL-like licenses

You've said "size" twice in comments, did you mean "seize"?

Its purpose is "if you run the software you should be able to inspect and modify that software, and to share those modifications with your peers", not explicitly to resist copyright. Yes, copyright is bad in that it often prevents one from doing that, but it is not the purpose of the GPL to dismantle copyright.

Reducing it to "well, you can clone the proprietary software you're forced to use with an LLM" is really missing the soul of the GPL.

If not for copyright, you could always do that and copyleft wouldn't be needed.

Just because something is copyleft doesn't mean the person who gave you the binary you're using has to supply you with the code they used to build it. That's what the GPL does.

I agree with almost all of that, except the part about GNU changing their stance. I think GNU should stay true and consistent, if for no other reason than to not make many of their supporters who aren't on board with AI feel betrayed and have GNU's legacy soured. If the cause of LLMs conquering proprietary software needs an organization to champion it, let that be a new organization, not GNU.

That's naive. Copyright doesn't just apply to software. There already have been countless lawsuits about copying music long before the term "open source" was invented. No, changing the lyrics a bit doesn't circumvent copyright. Nor does translating a Stephen King novel to German and switching the names of the places and characters.

A court ordered the first Nosferatu movie to be destroyed because it had too many similarities to Dracula. Despite the fact that the movie makes rather large deviations from the original.

If Claude was indeed asked to reimplement the existing codebase, just in Rust and a bit optimized, that could well be a copyright violation. Just like rephrasing A Song of Ice and Fire a bit, and switching to a different language, doesn't remove its copyright.

Claude was asked to implement a public API, not an entire codebase. The definition of a public API is largely functional; even in an unusually complex case like the Java standard facilities (which are unusually creative even in the structure and organization of the API itself) the reimplementation by Google was found to be fair use.

> Claude was asked to implement a public API, not an entire codebase.

Allegedly. There have been several people who doubted this story. So how to find out who is right? Well, just let Claude compare the sources. Coincidentally, Claude Opus 4.6 doesn't just score 75.6% on SWE-bench Verified but also 90.2% on BigLaw Bench.

It's like our copyright lawyer is conveniently also a developer. And possibly identical to the AI that carried out the rewrite/reimplementation in question in the first place.

> Just like rephrasing A Song of Ice and Fire a bit, and switching to a different language, doesn't remove its copyright.

There is some precedent for this, e.g. Alchemised is a recent best seller that had just enough changed from its Harry Potter fan fiction source in order to avoid copyright infringement: https://en.wikipedia.org/wiki/Alchemised

(I avoided the term “remove copyright” here because the new work is still under copyright, just not Harry Potter-related copyright.)

That's apparently a different story with different plot, so that's not comparable.

Plots are broadly not copyrightable, “different plot” is less important than “different characters”.

I'm pretty sure the plot is copyrightable, otherwise you could just translate Harry Potter to a different language and change the names of the characters.

This is naive. Advertisement and network effects win. Individuals cannot compete with corporations on equal ground here.

> AI is eroding copyright

Unless it is IP of the same big corpos that consumed all content available. Good luck with eroding them.

So not only are we moving goalposts here, but we've decided the GNU team should join the other team? I don't understand how GNU would see mass model LLM training as anything but the most flagrant violations of their ethos. LLM labs, in their view, would be among the most evil software corporations to have ever existed.

> What AI is eroding is copyright.

At the moment it's people that are eroding copyright. E.g. in this case someone did something.

"AI" didn't have a brain, woke up and suddenly decided to do it.

Realistically nothing to do with AI. Having a gun doesn't mean you randomly shoot.

[deleted]

While I personally agree with you, Richard Stallman (the creator of the GPL) does not. He has always advocated in favor of strong copyright protection, because the foundation of the GPL is the monopoly power granted by copyright. The problem that the GPL is intended to solve is proprietary software.

Generative models (AI) are not really eroding copyright. They are calling its bluff. The very notion of intellectual property depends on a property line: some arbitrary boundary where the property begins and ends. Generative models blur that line, making it impractical to distinguish which property belongs to whom.

Ironically, these models are made by giant monopolistic corporations whose wealth is quite literally a market valuation (stock price) of their copyrights! If generative models ever become good enough to reimplement CUDA, what value will NVIDIA have left?

The reality is that generative models are nowhere near good enough to actually call the bluff. Copyright is still the winning hand, and that is likely to continue, particularly while IP holders are the primary authors of law.

---

This whole situation is missing the forest for the trees. Intellectual Property is bullshit. A system predicated on monopoly power can only result in consolidated wealth driving the consolidation of power; which is precisely what has happened. The words "starving artist" ring every bit as familiar today as any time in history. Copyright has utterly failed the very goals it was explicitly written with.

It isn't the GPL that needs changing. So long as a system of copyright rules the land, copyleft is the best way to participate. What we really need is a cohesive political movement against monopoly power; one that isn't conveniently ignorant of copyright as its most significant source.

Right, anything that can be copied instantly for free cannot be realistically owned.