All of the reasons in the article also apply to Chinese companies. If a Chinese model becomes good enough to make it significantly easier to hack Chinese government servers, do you think they'll allow random people unfettered access to it?

The economic pressures are the same, too. Currently, Chinese models are offered cheap, or in some cases have their weights released for free, because that's the only way to gain traction. (The fact that closed-weight releases by Baidu, ByteDance, iFlytek, etc. hardly generate any buzz bears that out, as does the fact that whenever Alibaba does a closed-weight release, someone always gets confused because they associate the Qwen brand with open models.) At some point, their investors are going to want profits, not just user counts. That means higher prices, or no more new models.

If there's no secret sauce and all you need is scale, that's actually close to the worst-case scenario for catching up to the frontier: scaling is expensive, and the frontier model companies have both easier access to capital and higher revenues.

> If a Chinese model becomes good enough to make it significantly easier to hack Chinese government servers, do you think they'll allow random people unfettered access to it?

Those companies aren't trying to make models that good, nor do they need to in order to have a real positive impact. Models like Mythos are estimated to be humongous even at datacenter scale, which is a big factor in their limited availability at present. Mythos is mostly useful as a one-of-a-kind proof of concept: it answers the question of whether AI can still plausibly scale up in capability, and what happens to alignment concerns when it does.

I expect every company to try to make a model as good as it possibly can, especially now that Mythos has served as a proof of concept demonstrating that there's lots of interest in AI for cybersecurity. But even if they don't try, that hardly assuages concerns about not being able to access the very best models, does it?