It’s all open source, and even their methods are published. Berkeley was able to replicate R1’s reasoning principle on a roughly $30 compute budget. Open-R1 aims to fully replicate the R1 results from the published methods and recipes, and its distillation results already look very impressive. All of these open-source models are based on Meta’s Llama and open to everyone. Why shouldn’t Western labs and universities be able to keep building on and innovating with open-source models?
I don’t see why we would have to rely on China. Keeping the open-source projects open, however, is extremely important. That is what we should fight for, instead of chasing conspiracy theories or political narratives.
1) Because Open-R1 is a toy, a science-fair project, not something anyone actually uses. 2) It’s based on the techniques described in the R1 paper.
The entire open ecosystem in the U.S. relies on the generosity of Chinese labs in sharing their methods in addition to their models.
So, you don’t even know (or don’t want to admit) that:
- their models and all other open-source models are based on Meta’s Llama? Or is Meta a Chinese lab? Yes, Mark’s wife is Vietnamese-Chinese, so maybe you’ll say that :D
- and that they extracted (distilled) data from OpenAI’s ChatGPT in violation of its very terms of use? Even now, when asked, DeepSeek often says “I’m ChatGPT, your helpful assistant …” (see the sketch after this list)
- in science, there is no generosity of the kind you describe? You publish or you perish. Everyone needs cross-validation and learns from the others.
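For readers unfamiliar with the term: “distillation” in this context usually means sequence-level distillation, i.e. collecting a teacher model’s answers and then fine-tuning a student model on those prompt–response pairs. Below is a minimal sketch of the data-collection step only, assuming a hypothetical prompt list, teacher model choice, and output path; it illustrates the general technique being alleged, not DeepSeek’s actual pipeline.

```python
"""Sketch: collect a teacher model's answers to build a fine-tuning
dataset for a student model (sequence-level distillation).
Prompt list, teacher model, and output file are hypothetical."""
import json

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt set; a real pipeline would use a large corpus.
prompts = [
    "Explain why the sky is blue.",
    "Solve: if 3x + 2 = 11, what is x?",
]

with open("distill_pairs.jsonl", "w", encoding="utf-8") as f:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical teacher choice
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content
        # Each line becomes one supervised fine-tuning example.
        f.write(json.dumps({"prompt": prompt, "response": answer}) + "\n")
```

If a student model is fine-tuned on pairs collected this way, it inherits the teacher’s phrasing wholesale, which is one plausible explanation for a model echoing the teacher’s self-identification (“I’m ChatGPT …”).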