Chinese labs are the only game in town for capable open source LLMs (gpt-oss is just not good). China-hawk lawmakers in the U.S. have repeatedly talked about banning LLMs made by Chinese labs.

I see this hit piece, which offers no proof and no description of its methodology, as another attempt to turn the uninformed public against everything related to China.

Who would benefit the most if Chinese models were banned from the U.S. tech ecosystem? I know the public and the startup ecosystem would suffer greatly.

> Who would benefit the most if Chinese models were banned from the U.S. tech ecosystem? I know the public and the startup ecosystem would suffer greatly.

Ideally, gpt-oss or other FLOSS models that aren't Chinese.

Ideally. It probably won't turn out that way, but I don't think we really have to worry about it coming to that.

It’s all open source and even their methods are published. Berkeley researchers replicated the reasoning principle of R1 on a $30 compute budget. Open-R1 aims to fully replicate the R1 results from the published methods and recipes, and its distilled results already look very impressive. All these open source models are based on Meta’s Llama and open to everyone. Why should Western labs and universities not be able to keep innovating with open source models?

I don’t see why we have to rely on China. Keeping the open source projects open, however, is extremely important, and that is what we should fight for, not chasing conspiracy theories or political narratives.

https://github.com/huggingface/open-r1
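For a concrete sense of what "replicating the reasoning principle" means here: the R1-Zero recipe is RL (GRPO) against rule-based, verifiable rewards instead of a learned reward model, which is roughly what the cheap Berkeley-style runs reproduce on toy tasks. Below is a minimal sketch using trl's GRPOTrainer; the model name, prompts, and reward function are illustrative toys, not the actual Open-R1 configuration.

```python
# Minimal sketch of R1-Zero-style training: GRPO against a rule-based,
# verifiable reward (no learned reward model). All names are illustrative.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompts; real runs use verifiable tasks (math, countdown, code).
train_dataset = Dataset.from_dict({
    "prompt": ["What is 12 * 7? Think step by step, then answer."] * 64
})

def rule_based_reward(completions, **kwargs):
    # Reward 1.0 if the verifiable answer appears in the completion, else 0.0.
    return [1.0 if "84" in c else 0.0 for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # small base model, illustrative
    reward_funcs=rule_based_reward,
    args=GRPOConfig(output_dir="r1-zero-sketch", num_generations=4),
    train_dataset=train_dataset,
)
trainer.train()
```

The point of the published recipe is that the reward is a cheap, checkable rule, which is why a small lab budget is enough to reproduce the principle.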

1) Because open-r1 is a toy, a science-fair project, not something anyone actually uses. 2) It's based on the techniques described in the R1 paper.

The entire open ecosystem in the U.S. relies on the generosity of Chinese labs, who share their methods in addition to their models.

So, you don’t even know (or don’t want to admit) that:

- their models, and all other open source models, are based on Meta’s Llama? Or is Meta a Chinese lab? Yes, Mark’s wife is Vietnamese-Chinese, so maybe you will say that :D

- that they extracted (distilled) data from OpenAI’s ChatGPT in violation of its very terms of use? Even now, when asked, DeepSeek often says “I’m ChatGPT, your helpful assistant …” (see the sketch after this list for what such distillation looks like in practice)

- that in science there is no generosity of the kind you described? You publish or you perish. Everyone needs cross-validation and needs to learn from the others.
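Mechanically, "distilling" an API model is mundane: you sample the teacher's answers to your prompts and save them as supervised fine-tuning data for a student. A minimal sketch below, assuming the OpenAI Python SDK; the teacher model name and prompts are placeholders, not what DeepSeek actually used.

```python
# Sketch of data distillation from an API model: collect the teacher's
# responses and store them as chat-format SFT examples for a student.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompts = [
    "Prove that sqrt(2) is irrational.",
    "Explain why the sum of two even numbers is even.",
]  # placeholder task prompts

with open("teacher_traces.jsonl", "w") as f:
    for p in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative teacher model
            messages=[{"role": "user", "content": p}],
        )
        # One chat-format training example per prompt/answer pair.
        record = {"messages": [
            {"role": "user", "content": p},
            {"role": "assistant", "content": resp.choices[0].message.content},
        ]}
        f.write(json.dumps(record) + "\n")
```

This is also why distilled students sometimes parrot the teacher's self-identification: the teacher's phrasing ends up verbatim in the training data.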