My guess is that they want to push the idea that Chinese models could be backdoored, so that when they write code and some trigger is hit, the model could make an intentional security mistake. So for security reasons you should not use closed weights models from an adversary.
Even open weights models would be a problem, right? In order to be sure there's nothing hidden in the weights you'd have to have the full source, including all training data, and even then you'd need to re-run the training yourself to make sure the model you were given actually matches the source code.
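As a rough illustration of what that last step would even look like (and glossing over the fact that large-scale training is rarely bit-for-bit reproducible, so in practice you'd be comparing behavior rather than bytes), here's a minimal sketch with hypothetical file names: after re-running the published training pipeline yourself, you'd compare a hash of your checkpoint against the one that was actually shipped.

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a (possibly huge) weights file in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths: the checkpoint you reproduced from the published
# source + training data, and the checkpoint the vendor distributed.
reproduced = file_sha256(Path("reproduced/model.safetensors"))
distributed = file_sha256(Path("released/model.safetensors"))

if reproduced == distributed:
    print("Weights match what the published recipe produces.")
else:
    print("Mismatch: the shipped weights are not what the recipe produces.")
```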
Right, you would need open source models that were checked by multiple trusted parties to be sure there is nothing bad in them, though honestly, with such a huge quantity of input data it could be hard to be sure no "poison" was already slipped in. I mean, with source code it is possible for a team to review everything; with AI it is impossible for a team to read all the input data, so hopefully some automated way to scan it for crap would be possible.
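For what it's worth, the crudest version of that automated scan is just pattern matching over the corpus, something like the sketch below (the patterns and file layout are made up for illustration; a real poisoning attack would almost certainly not use obvious literal triggers, so this only catches the low-hanging fruit and flags it for human review).

```python
import re
from pathlib import Path

# Hypothetical examples of "suspicious-looking" patterns worth flagging for review.
SUSPICIOUS_PATTERNS = [
    re.compile(r"curl\s+[^|]+\|\s*(sh|bash)"),  # piping downloads straight into a shell
    re.compile(r"verify\s*=\s*False"),          # disabling TLS certificate verification
    re.compile(r"eval\s*\(\s*base64"),          # eval of base64-encoded blobs
]

def scan_corpus(root: str) -> None:
    """Flag training-data files containing suspicious patterns for manual review."""
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(text):
                print(f"{path}: matched {pattern.pattern}")

scan_corpus("training_data/")  # hypothetical corpus directory
```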