> "Open weights" borrows the legitimacy of open source

I don't really see how open-weights models need to borrow any legitimacy. They are valuable artifacts being given away, and they can be used, tested, and repurposed indefinitely. Fully open models like the OLMo series and Nvidia's Nemotron are more valuable in some contexts, but they haven't quite matched the performance of the best open-weights models. I think that's why most startups reach for Chinese base LLMs when they want to tune custom models: the performance is better, and they were never going to do their own pretraining anyway.