There's something heartwarming about the developer docs being released before the flashy press release.

Their audience is people who build stuff; tech's audience is enterprise CEOs and politicians, and anyone else happy to hype up all the questionably timed releases and warnings of danger, white-collar irrelevance, or promises of utopian paradise right before a funding round.

Now that we can use AI to write the docs, test the docs, proofread the docs, it really isn't that much of a feat, right?

[deleted]

Insert obligatory "this is the way" Mando scene. Indeed!

Where's the training data and training scripts since you are calling this open source?

Edit: it seems "open source" was edited out of the parent comment.

Doesn't it get tiring after a while? Using the same (perceived) gotcha, over and over again, for three years now?

No one is ever going to release their training data, because it contains every copyrighted work in existence. Everyone, even the hecking-wholesome safety-first Anthropic, is using copyrighted data without permission to train their models. There you go.

There is an easy fix already in widespread use: "open weights".

It is very much a valuable thing already, no need to taint it with a false promise.

Though I disagree that it wouldn't get used if it were indeed open source: I might not do it in my home lab today, but at least Qwen and DeepSeek would use and build on what e.g. Facebook was doing with Llama, and they might be pushing the open-weights model frontier forward faster.

> There is an easy fix already in widespread use: "open weights"

They're both correct given how the terms are actually used. We just have to deduce what's meant from context.

There was a moment, around when Llama was first being released, when the semantics hadn't yet settled. The nutter wing of the FOSS community, to my memory, put forward a hard-line and unworkable definition of open source and seemed to reject open weights, too. So the definition got punted to the closest thing at hand, which was open weights with limited (unfortunately, not zero) use restrictions. At this point, it's a personal preference that's at most polite to respect if you know your audience has one.

Yeah, open weights is really good, especially when base model weights (not just the instruction-tuned ones) are released, like here.

Nvidia did with NeMo.

It's not a gotcha; it's people using words in ways others don't like.

It's not about likes; it's a flat-out lie.

They are exactly open source. The training data is the internet. Don't say it's on the internet. It IS the internet.

The training scripts are in Megatron and vLLM.

Aww yes, let me push a couple petabytes to my git repo for everyone to download...

An easier thing would be to say "open weights", yes.

Weights are the source, training data is the compiler.

You got it the wrong way round. It's more akin to:

1. Training data is the source.
2. Training is compilation/compression.
3. Weights are the compiled output, akin to optimized assembly.

However, it's an imperfect analogy on so many levels. Nitpick away.

It's a dataset [0] released under some source-available license or an OSI license, i.e. an open dataset or open-source dataset.

[0] https://news.ycombinator.com/item?id=47758408