Isn't it basically impossible for the input data sets to be listed? It's an open secret that all these labs are using immense amounts of copyrighted material.
There are a few efforts at full open-data / open-weight / open-code models, but none of them have reached leading-edge performance.
My brain was largely trained using immense amounts of copyrighted material as well. Some of it I can even regurgitate almost exactly. I could list the names of many of the copyrighted works I have read/watched/listened to. I suppose my brain isn't open source, although I don't think it would currently be illegal, if the technology existed, to take a snapshot of my brain and publish it as open source. Granted, this would only be "reproducible" from source if you define the "source" as "my brain" rather than all of the material I consumed to make that snapshot.
:-) I like the symmetry of this. If I want to keep my creations outside the hands of others, I can keep them private. I don’t have to publish these words or broadcast them to the world. I could write this on my laptop, save it in a file, and keep it to myself. Fine.
However, once these words are broadcast—once they’re read, and the ideas expressed here enter someone else’s mind—I believe it’s only fair that the person on the receiving end has the right to use, replicate, or create something from them. After all, they lent me their brain—ideas that originated in my mind now live in theirs.
This uses up their mental "meat space," their blood sugar, and their oxygen—resources they provide. So, they have rights too: the right to do as they please with those ideas, including creating any and all data derived from them. Denying them that right feels churlish, as if it isn’t the most natural thing in the world.
(Before people jump on me: yes, creators need to be compensated—they deserve to make a living from their work. But this doesn’t extend to their grandchildren. Copyright laws should incentivize creation, not provide luxury for the descendants of the original creator a century later.)
> Some of it I can even regurgitate almost exactly
If you (or any human) violate copyright law, legal redress can be sought. The amount of damage you can do is limited because there's only one of you vs the marginal cost of duplicating AI instances.
There are many other differences between humans and AI in terms of capabilities and motivations that matter to the legal persons making decisions.
You may be right about the damage (I won't dispute it, even if I personally doubt it) - but what about the amount of good it can do too? When deciding "what is to be done now" under uncertainty, we typically look at both sides of the ledger: the upsides in addition to the downsides.
Assume for a moment that current AI is teaching us that compute transforming data → information → knowledge → intelligence → agency → ... → AGI → ASI is all there is to intelligence-on-tap. And imagine an AI path opens to AGI now and ASI later, where previously we didn't see any. It seems a bad deal to me to frustrate, slow down, or even forego the 2050s Intelligence Revolution that may multiply total human wealth by a factor of 10 to 20, the way the Industrial Revolution did in the 1800s. And we are to forego this for what - so that we can provide UBI to Disney shareholders? Every one of us is richer, better off now, than any king of old. Not too long ago, even the most powerful person in the land could not prevent 17 miscarriages, stillbirths, and child deaths from denying them an heir to ascend the throne (and that was surely a top priority for a king and queen). So in our imagined utopia, even the Disney shareholders are better off than they would otherwise be.
> It seems a bad deal to me to frustrate, slow down, or even forego the 2050s Intelligence Revolution that may multiply total human wealth by a factor of 10 to 20...
Why do you assume the emergence of a super intelligence would result in human wealth increasing instead of decreasing? Looking at how humans with superior technology used it to exploit fellow humans throughout history should give you pause. Humans don't care about the aggregate "dog wealth" - let alone that of ants.
I'm assuming the Intelligence Revolution, multiplying human intelligence with machines, will have the same effect as the Industrial Revolution had on multiplying human physical strength. That multiplied GDP by a factor of ~20, hockey-stick-like, in a fairly short time, a century or two.
The Industrial Revolution was powered by natural resources that it helped unlock. What value reserve will AI tap into to create hockey-stick growth?
It will recombine the existing resources in new ways. Neanderthals had access to exactly the same natural resources as we have now. Obviously we do much more with what we both got than they ever did. Obviously it's not only the availability of some atoms or molecules, but what one does with them, how one recombines them in novel ways. For that one needs knowledge and energy. And the latter, it turns out, can mostly be derived from the former too.
Obviously it's what we do with them; the biotech manufacturing and nuclear power production revolutions happened pre-AI. The reason they haven't replaced petroleum is economic and social.
> The amount of damage you can do is limited because there's only one of you vs the marginal cost of duplicating AI instances
But enough about whether it should be legal to own a Xerox machine. It's what you do with the machine that matters.
> It's what you do with the machine that matters.
The capabilities of a machine matter a lot under law. See current US gun legislation[1], or laws banning export of dual-use technology, for examples of laws that have inherent capabilities - not just the use of the thing - as core considerations.
1. It's illegal to possess an automatic weapon manufactured after 1986; earlier ones are grandfathered.
While true, computers in general already had the ability to perfectly replicate data, hence the blank media tax: https://en.wikipedia.org/wiki/Private_copying_levy
I think the reason for all the current confusion is that we previously had two very distinct groups, "mind" and "mindless"*, and that left everyone free to learn a completely different separating hyperplane between the categories; AI is now far enough into the middle that for some of us it's on one side and for others it's on the other.
* and various other pairs that used to be synonyms but no longer are; so also "person" vs. "thing", though currently only very few actually think of AI as person-like
Yes, but gun control and dual-use export regulations are both stupid. We need fewer tool-blaming laws, not more.
(Standing by for the inevitable even-goofier analogy comparing AI with privately-owned nuclear arsenals...)
The only way this would work is with "leaks". But even then, as we saw with everything on the internet, it just added another guardrail on content. Now I can't watch YouTube videos without logging in, and on nearly every website I need to solve some weird-ass captchas. It's becoming easier to interact with these chatbots than to search for a solution online. And I wonder whether, with Veo 4 copycats, it might become even easier to prompt for a video than to search for one.
That doesn't mean it isn't possible.
“Not possible” = “a business-destroying level of honesty”?
Even if training on the copyrighted material is OK, just providing a data dump of it almost certainly is not.
No need for a data dump; just list all the URLs or whatever else identifies their training data sources. AFAIK that's how the LAION training dataset was published.
Providing a large list of bitrotted URLs, plus titles of books which the user should OCR themselves before attempting to reproduce the model, doesn't seem very useful.
Aren't the datasets mostly shared in torrents? They probably won't bitrot for some time.
...no? They also use web crawlers.
The datasets are collected using web crawlers, but that doesn’t tell us anything about how they are stored and re-distributed, right?
Why would you store the data after training?
Are you saying that you know they don’t store the data after training?
I’d just assume they did because—why scrape again if you want to train a new model? But if you know otherwise, I’m not tied to this idea.
I'm also assuming. But I would ask the opposite question: why store all that data if you'll have to scrape again anyway?
You will have to scrape again because you want the next AI to get trained on updated data. And, even at the scale needed to train an LLM, storing all of the text on the entire known internet is a very non-trivial task!
If you try to reproduce various open datasets like fineweb by scraping the pages again, you can't, because a lot of the pages no longer exist. That's why you would prefer to store them instead of losing the content forever.
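A minimal sketch of how you could measure that rot yourself, assuming a file urls.txt with one source URL per line (the filename and threshold are hypothetical, not part of any dataset's tooling):

    # Count how many dataset source URLs still resolve.
    import requests

    alive = dead = 0
    with open("urls.txt") as f:
        for url in (line.strip() for line in f if line.strip()):
            try:
                r = requests.head(url, timeout=5, allow_redirects=True)
                if r.status_code < 400:
                    alive += 1
                else:
                    dead += 1
            except requests.RequestException:
                dead += 1  # DNS failure, timeout, etc. count as gone
    print(f"alive: {alive}, dead: {dead}")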
It's not "all of the text", it's like less than 100 trillion tokens, which means less than 400TB assuming you don't bother to run the token streams through a general purpose compression algorithm before storing them.
There is a "keep doing what you're doing, as we would want one of our companies to be on top of the AI race" signal from the governments. It could've been stopped, maybe, 5 years ago. But now we're way past it, so nobody cares about these sort of arguments.