There was that study by anthropic that showed that an LM fine-tuned on insecure code with no additional separate prompting or fine-tuning would be more willing to act unethically. So maybe this is the equivalent in that the corpus of training data for deep-seek presumably is very biased against certain groups, resulting in less secure code for disfavored groups.

Yeah tbh I can see this happening unintentionally. Like DeepSeek trying to censor Falun Gong and getting these results. But tbh, I think it is concerning in either case. It is a difference between malice and unintended mistakes through trying to move too fast. Both present high risks, and neither is unique to China nor DeepSeek.

But most of all, I'm trying to get people to not just have knee-jerk reactions. We can do some vetting very quickly, right? So why not? I'm hoping better skilled people will reply to my main comment with evidence for or against the security claim, but at least I wanted to suppress this habit we have of just conjecturing out of nothing. The claims are testable, so let's test instead of falling victim to misinformation campaigns. Of all places, HN should be better