> That's a tautology. People think chinese models are 10x more efficient because they're 10x cheaper

They do have different infrastructure / electricity costs and they might not run on Nvidia hardware.

It's not just the models.

Except there are providers that serve both Chinese models AND Opus, on the same hardware.

Namely, Amazon Bedrock and Google Vertex.

That means normalized infrastructure costs, normalized electricity costs, and normalized hardware performance. Normalized inference software stack, even (most likely). It's about as close to a 1-to-1 comparison as you can get.

Both Amazon and Google serve Opus at roughly ~1/2 the speed of the Chinese models. Note that they are not incentivized to slow down the serving of Opus or the Chinese models! So that tells you the ratio of active params for Opus and for the Chinese models.
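A rough sketch of that inference, assuming single-stream decode is memory-bandwidth-bound (the weights are read once per generated token), so tokens/sec scales roughly inversely with active-parameter bytes. The bandwidth and parameter counts below are made-up placeholders, not actual Opus or Chinese-model figures:

```python
# Back-of-envelope only: in bandwidth-bound decoding,
# tokens/sec ≈ memory_bandwidth / (active_params * bytes_per_param).
# Real serving adds batching, KV-cache reads, etc., so this is just the intuition.

def decode_tps(bandwidth_gb_s: float, active_params_b: float, bytes_per_param: float = 2.0) -> float:
    """Very rough single-stream decode speed in tokens/sec."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param  # weight bytes read per token
    return bandwidth_gb_s * 1e9 / bytes_per_token

BW = 3000.0  # GB/s, hypothetical accelerator memory bandwidth

model_a = decode_tps(BW, active_params_b=30)  # hypothetical model, ~30B active params
model_b = decode_tps(BW, active_params_b=60)  # hypothetical model, ~60B active params

print(f"A: {model_a:.0f} tok/s, B: {model_b:.0f} tok/s, ratio: {model_a / model_b:.1f}x")
# ~2x: serving at half the speed on the same hardware suggests ~2x the active params.
```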

Deployments like Bedrock have nowhere near SOTA operational efficiency, 1-2 OOM behind. The hardware is much closer, but pipeline, scheduling, caching, recomposition, routing etc. optimizations blow naive end-to-end architectures out of the water.

And Microsoft's Azure. It's on all three major cloud providers. Which tells me they can make a profit through these cloud providers without having to pay for any hardware themselves; they just take a small enough cut.

https://code.claude.com/docs/en/microsoft-foundry

https://www.anthropic.com/news/claude-in-microsoft-foundry

AWS and GCP both have their own custom inference chips, so a better example for hosting Opus on commodity hardware would be Digital Ocean.

> Both Amazon and Google serve Opus at roughly ~1/2 the speed of the Chinese models

The claim we were responding to was about 10x, not 0.5x.

x86 vs arm64 could have different performance. The Chinese models could be optimized for different hardware, so that could show up as massive differences.

These providers do not run models on CPUs; x86 vs. Arm is irrelevant.

I mean GN has covered the Nvidia black market in China enough that we pretty much know that they run on Nvidia hardware still.

How is this related to the inference, may I ask? Except for some very hardware-specific optimizations of the model architecture, there's nothing to prevent anyone from hosting these models on their own infrastructure. And that's what many OpenRouter providers, at least some of which are based in the US, are actually doing. Because most of the Chinese models mentioned here are open-weight (except for Qwen, which has one proprietary "Max" model), literally anyone can host them, not just someone from China. So it just doesn't really matter.

I mean sure, but in terms of inference per dollar / per watt, Nvidia's GPUs are pretty up there - unless China is pumping out domestic chips cheaply enough.

Also, with Nvidia you get the efficiency of everything (including inference) built on/for CUDA; even efforts to catch AMD up are still ongoing afaik.

I wouldn't be surprised if things like DS (DeepSeek) were trained and are now hosted on Nvidia hardware.

> unless China is pumping out domestic chips cheaply enough

They are. Nvidia makes A LOT of profit. Hey, top stock for a reason.

> I wouldn't be surprised if things like DS were trained and now hosted on Nvidia hardware

DS is "old". I wouldn't study them. The new 1s have a mandate to at least run on local hardware. There are data center requirements.

I agree they could still be trained on Nvidia GPUs (black market etc), but not run on them.

> The new ones have a mandate to at least run on local hardware.

They do? Source?

But if that's true, it would explain why Minimax, Z.ai and Moonshot are all organized as Singaporean holding companies, with claimed data center locations (according to OpenRouter) in the US or Singapore and only the devs in China. Can't be forced to use inferior local hardware if you're just a body shop for a "foreign" AI company. ;)

> with claimed data center locations (according to OpenRouter) in the US or Singapore and only the devs in China

They just have a China-only endpoint and likely a company under a different name.

Nothing to do with AI. TikTok is similar (global vs China operations).