1) Because open-r1 is a toy - a science-fair project - not something anyone actually uses. 2) It's based on the techniques described in the R1 paper.

The entire open ecosystem in the U.S. relies on Chinese labs being generous enough to share their methods as well as their models.

So, you don’t even know (or don’t want to admit) that:

- their models, like so many other open-source models, are based on Meta's Llama? Or is Meta a Chinese lab? Well, Mark's wife is Vietnamese-Chinese, so maybe you'll say it is :D

- they extracted (distilled) data from OpenAI's ChatGPT, in violation of its very terms of use? Even now, when asked, DeepSeek often says "I'm ChatGPT, your helpful assistant …"

- in science, there is no such "generosity" as you describe. It's publish or perish. Everyone needs cross-validation and learns from others.