I just left a job at a German B2B software company that sold primarily to large automotive, defense, and aerospace companies. Several of our customers specifically banned anything with the word "DeepSeek" -- hosted or self-hosted.

There's still a lot of naivety about the difference between models and platforms, and it's easier for a lot of these big companies to just make a blanket statement like "nothing DeepSeek" than for their procurement teams to try to understand and negotiate with each vendor. They don't see the potential benefit as outweighing the risk of somebody misinterpreting the policy or getting it wrong, so they ban it outright.

Most people who approve or buy software also simply don't understand how models are trained, whether it's even possible for a model to "introduce backdoors," or how far one could go. From a business perspective, a backdoor could be a model trained to give answers that hurt Western business in a plain text mode, or one that, in a coding mode, produces payloads intentionally trained to introduce software vulnerabilities.

Anyone can argue against these concerns for a variety of reasons (examining and comparing the transparency of both sides, etc.), but for many reasons today, and for better or worse, many Chinese models are being banned from big software contracts -- which gets back to the title of the article.

The thing is, these models can also be a propaganda machine whether you run them locally or not. This is true no matter the origin. Chinese LLMs will never shit-talk the CCP, and they will always give a rosy depiction of the Chinese government. It's perfectly understandable if companies don't want things like that. US/EU models have these problems too, but at least there are some ways to fight back: with a lawsuit or a megaphone on social networks. With Chinese models there is nothing you can do.

You are sending all your prompts, code, and files there. So of course it's an issue.

Where's "there" on a self-hosted setup?