You're not wrong, but the second claim is by far the more interesting of the two, and is what I think most people would like to see proven. AI outright refusing certain tasks based on filters set by the parent company is not really new or interesting, but it would be interesting to see an AI knowingly introduce security flaws in generated code specifically for targeted groups.
I don't disagree. The second is more concerning but I do think the first is interesting. At least in how cultural values and laws pass beyond country borders. Far less concerning but still interesting.
But what are you attacking my claim for? That I'm requesting people don't have knee-jerk reactions and for help vetting the more difficult claim? Is this wrong? I'm not trying to make the claim that it does or doesn't write insecure code (or less secure code) for specific groups. I've also made the claim in another comment that there are non-nefarious explanations to how this could happen.
I'm not trying to make a stance of "China bad, Murica good" or vise versa, I'm trying to make a stance of "let's try to figure out if true or not. How much is it true? How much is it false?" So would you like to help or would you like to create more noise?
For the record I never attacked your claim, I'm not the original person that said it was wrong.