It's. not. open. source!

https://www.downloadableisnotopensource.org/

Open source is a crazy new beast in the AI/ML world.

We have numerous artifacts to reason about:

- The model code

- The training code

- The fine tuning code

- The inference code

- The raw training data

- The processed training data (which might vary across various stages of pre-training and potentially fine-tuning!)

- The resultant weights

- The inference outputs (which also need a license)

- The research papers (hopefully the process is described in the literature!)

- The patents (or lack thereof)

The term "open source" is wholly inadequate here. We need a 10-star grading system for this.
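A minimal sketch of what such a scorecard could look like, assuming the simplest possible scheme of one point per openly licensed artifact. The artifact names mirror the list above; the scoring itself is illustrative, not any official standard.

```python
# Hypothetical openness scorecard: one point per openly released artifact.
# Artifact names follow the list above; the weighting (flat, 1 point each)
# is an assumption for illustration.
ARTIFACTS = [
    "model code",
    "training code",
    "fine-tuning code",
    "inference code",
    "raw training data",
    "processed training data",
    "weights",
    "inference outputs license",
    "research papers",
    "patent grants",
]

def openness_score(released: set[str]) -> str:
    """Count how many of the ten artifacts are openly released."""
    points = sum(1 for a in ARTIFACTS if a in released)
    return f"{points}/10"

# e.g. a typical weights-plus-paper release:
print(openness_score({"weights", "inference code", "research papers"}))  # 3/10
```

Real grading would of course need to weigh artifacts differently (weights without training data matter more than patents), but even a flat checklist beats a single "open source: yes/no" bit.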

This is not your mamma's C library.

AFAICT, DeepSeek scores 7/10, which is better than OpenAI's 0/10 (they don't even let you train on the outputs).

This is more than enough to distill new models from.

Everybody is laundering training data, and it's rife with copyrighted data, PII, and pilfered outputs from other commercial AI systems. Because of that, I don't expect we'll see much legally open training data for some time to come. In fact, the first fully open training data of adequate size (not something like LJSpeech) is likely to be 100% synthetic or robotically-captured.

https://opensource.org/ai ... lots of reasoning has been done on those artifacts.

I think you're trying to make it look more complex than it is. Put the amount of data next to every entry in that list of yours.

Most of those items map to a job description.

If you think the data story isn't a complicated beast, then consider:

If you wanted an "open" dataset, would you want it before or after it was processed? There are many cleaning, categorization, and feature-extraction steps. The data typically undergoes a lot of analysis, extra annotation, bucketing, and transformation.

If the pre-train was done in stages, and the training process was complicated, how much hand-holding do you need to replicate that process?

Do you need all of the scripts to assist with these processes? All of the infra and MLOps pieces? There's a lot of infrastructure to just move the data around and poke it.

Where are you going to host those terabytes or petabytes of data? Who is going to download it? How often? Do you expect it to be downloaded as frequently as the Linux kernel sources?

Did you scrub it of PII? Are you sure?
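A toy sketch of a tiny fraction of such a pipeline, assuming nothing about any real lab's tooling: exact dedup plus a naive regex PII scrub. The regexes here would miss plenty of real PII, which is exactly the "Are you sure?" problem.

```python
# Illustrative (and deliberately naive) data-cleaning stage: dedup + PII scrub.
# Real pre-training pipelines involve far more stages (fuzzy dedup, quality
# filtering, language ID, ...); these patterns are assumptions for the sketch.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def clean(doc: str) -> str:
    """Redact obvious emails/phone numbers and normalize whitespace."""
    doc = EMAIL_RE.sub("[EMAIL]", doc)
    doc = PHONE_RE.sub("[PHONE]", doc)
    return " ".join(doc.split())

def dedup(docs: list[str]) -> list[str]:
    """Drop exact (case-insensitive) duplicate documents."""
    seen, out = set(), []
    for d in docs:
        key = d.lower()
        if key not in seen:
            seen.add(key)
            out.append(d)
    return out

corpus = ["Contact me at jane@example.com!", "Contact me at jane@example.com!"]
print([clean(d) for d in dedup(corpus)])  # ['Contact me at [EMAIL]!']
```

And this is the trivial version; the regexes above catch only the most obvious patterns, so "did you scrub it of PII?" can never honestly be answered with a flat yes.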

And to clarify, we're not even talking about trained models at this point.

I'd argue we don't need a 10-star system. The single bit we have now is enough. And the question is also pretty clear: did $company steal other people's work?

The answer is also known. So the reason one would want an open-source model (read: a reproducible model) would be ethics.

We use pop-cultural references to communicate all the time these days. Those don't necessarily come from only the most commonly known sections of these works, so the AI would need the full work (or a functional transformation of it) to hit the theoretical maximum of its ability to decode and reason with such references. To exclude copyrighted works from the training set is to expect it to decode from the outside what amounts to humanity's own in-group jokes.

That's my formal argument. The less formal one is that copyright protection is something that smaller artists deserve more than rich conglomerates, and even then, durations shouldn't be "eternity and a day". A huge chunk of what is being "stolen" should be in the commons anyway.

"Your honor, if I hadn't robbed that bank I wouldn't have gotten all that money!"

I truthfully cannot think of a single model that satisfies your criteria.

And if we wait for the internet to be wholly eaten by AI, if we accept perfect as the enemy of good, then we'll have nothing left to cling to.

> And the question is also pretty clear: did $company steal other people's work?

Who the hell cares? By the time this is settled - and I'd argue you won't get a definitive agreement - the internet will be won by the hyperscalers.

Accept corporate gifts of AI, and keep pushing them forward. Commoditize. Let there be no moat.

There will be infinite synthetic data available to us in the future anyway. And none of this bickering will have even mattered.

"knowing why a model refuses to answer something matters"

The companies that create these models can't answer that question! Models get jailbroken all the time to ignore alignment instructions. The robust refusal logic normally sits on top of the model, i.e., looking at the responses and flagging anything that they don't want to show to users.

The best tool we have for understanding whether a model is refusing to answer or actually doesn't know is mechanistic interpretability, which only needs the weights.

This whole debate is weird; even with traditional open-source code you can't tell the intent of a programmer, what sources they used to write that code, etc.

it's got more 'source' than whatever OpenAI provides for their models.

Less-alcoholic beverages are fully alcoholic beverages.

0.5% or 0.03% satisfy my "nonalcoholic" criteria.

> Studies have found ethanol levels in commercial apple juice ranging from 0.06 to 0.66 grams per liter, with an average around 0.26 grams per liter[1]

Even apple juice is an alcoholic drink if you push your criteria to absurdity.

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC5421578/
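The cited figure and the "0.03%" threshold above actually line up: converting grams-per-liter to percent alcohol by volume (assuming ethanol's density of roughly 0.789 g/mL) puts average apple juice right at about 0.03% ABV.

```python
# Convert ethanol concentration (g/L) to % alcohol by volume.
# Assumes an ethanol density of ~0.789 g/mL at room temperature.
ETHANOL_DENSITY_G_PER_ML = 0.789

def g_per_l_to_abv(grams_per_liter: float) -> float:
    """Grams of ethanol per liter -> percent alcohol by volume."""
    ml_ethanol_per_liter = grams_per_liter / ETHANOL_DENSITY_G_PER_ML
    return ml_ethanol_per_liter / 1000 * 100  # mL ethanol per 1000 mL, as %

# The cited 0.26 g/L average works out to roughly 0.03% ABV:
print(round(g_per_l_to_abv(0.26), 3))  # 0.033
```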

but they're not bleach, and no amount of adding or removing alcohol can transmute the alcohol into something else.

No, it doesn't; it has exactly the same amount of source: zero. It just has more downloadable binary.

That’s the ‘source’ for what the model spits out though, if not the source for what spits out the model.

It is just freeware, not open source.

The "source" for something is all the stuff that makes you able to build and change that something. The source for a model is all the stuff that makes you able to train and change the model.

Just because the model produces stuff doesn't mean that's the model's source, just like the binary for a compiler isn't the compiler's source.

Ok

https://huggingface.co/deepseek-ai/DeepSeek-R1-0528/blob/mai...

Slapping an MIT license on a compiled binary doesn't make it open source.

They're keeping some stuff to themselves, which is fine. I don't expect anyone to have to fully release everything they've got, especially considering the vast costs associated with researching and developing these models.

What they have released has been distilled into many new models that others have been using for commercial benefit, and I appreciate the contributions they have made.

> I don't expect anyone to have to fully release everything they've got

I also don't expect Microsoft to release their full Windows 11 source code, but that also means it's not open source. And that's okay, because Microsoft doesn't call it open source.