For coding, quality at the margin is often crucial, even at a premium. It’s not the same as cranking out spam emails or HN posts at scale. This is why the marginal comp difference between your median engineer and your P99 engineer is substantial, while the marginal comp difference between your median pick-and-packer and your P99 pick-and-packer isn’t.
I’d also say that this competition, while costing the frontier shops R&D money in the present, benefits them by forcing them to make a better and better product, especially in the value-add space.
Finally, Anthropic in particular is going for the position of the more trustworthy shop. Even Alibaba is hosting paid frontier models for service revenue, but if you’re not a Chinese shop, would you really host your production code development workload on a Chinese-hosted provider? OpenAI is sketchy enough, but even there I have marginal confidence they aren’t just wholesale mining data for trade secrets, even if they are using it for model training. Anthropic I trust slightly more. Hence the premium. No one really believes at face value that a Chinese-hosted firm isn’t mass trolling every competitive advantage possible and handing it back to the government and other cross-competitive firms. Even if they aren’t, the historical precedent is so well established and well known that everyone prices it in.
I just assume any of those companies would steal my work and wouldn't care about it.
Everything they have done so far indicates this.
Running your own is the only option, unless you really trust them, or unless you have the option to sue them like some big companies do.
Or, if you don't really care, you can use the Chinese one, since it's cheaper.
What makes you trust Anthropic more than Alibaba?
There’s a difference between stealing for model training and direct monitoring of actionable trade secrets and corporate espionage. Anthropic and OpenAI wouldn’t do the latter, simply because they would be litigated out of existence and criminally investigated if they did. In China it’s an expected part of the corporate and legal structure, with virtually no recourse for a foreign firm, and, when it’s in the state’s interest, not for a domestic one either. I’m surprised you don’t realize the US has fairly strong civil, criminal, and regulatory protections in place against theft and reuse of actionable corporate material and trade secrets, let alone copyrighted materials. I assure you their ToS also do not allow them to do this, and that in itself is a contractual obligation you can enforce and win on in court.
Anthropic already admitted to heavily monitoring user requests to protect against distillation. They have everything in place, turning on learning from user data would literally be just a couple lines of code at this point. Anyone trusting them not to do it is a fool.
Absolutely. Plus as these companies become hungrier for revenue and to get out of the commodity market they are in, they are only going to get more aggressive in their (ab)use of customer data.
How exactly do you propose that a local weights model that I can run without an internet connection is going to exfiltrate my trade secrets to the Chinese government?
If you read my comment, I’m talking about their service-only models.
Why? No one else was. The discussion was about OpenAI / Anthropic's lack of moat when there are open weights models that are almost as good. You can host them anywhere you like. Pay a US company to do so if you want.
> For coding, quality at the margin is often crucial, even at a premium
That's a cryptic way of saying "only for vibe-coding does quality at the margin matter". Obviously, quality is determined first and foremost by the skills of the human operating the LLM.
> No one really believes at face value a Chinese hosted firm isn’t mass trolling every competitive advantage possible
That's much easier to believe than the same but applied to a huge global corp that operates in your own market and has both the power and the desire to eat your market share for breakfast, before the markets open, so "growth" can be reported the same day.
Besides, open models are hosted by many small providers in the US too, you don't have to use foreign providers per se.
1) model provider choices don’t obviate the need to make other good choices
2) I think there is a special case for Chinese providers due to philosophical differences in what constitutes fair markets. The regulatory and civil legal structure outside China generally makes such things existentially dangerous to do; hence, while it might happen, it is extraordinarily ill-advised, whereas in China it is implicitly the way things work. However, my point is that Alibaba has its own hosted versions of Qwen models operating at the frontier, which are at minimum hosted exclusively before being released. There’s no reason to believe they won’t at some point exclusively host some frontier or fine-tuned variants for commercial reasons. This is part of why they had recent turnover.
Most code is not P99 though.
Also, have you considered that your trust in Anthropic and distrust in China may not be shared by many outside the US? There's a reason why Huawei is the largest supplier of 5G hardware globally.
You're right, but perspective is important, and that's because China and the US are engaged in economic warfare (even before the current US regime), vying for the dubious title of "superpower".
I find it hard to believe anyone who has ever done business inside China doesn’t know that the structure of Chinese business is built around massive IP theft and repurposing on a state wide systematic level. It’s not a nationalism point, it’s an objective and easily verified truth.
Most code is not P99, but companies pay a premium to produce code that is. That’s my point.
I'll ask you the same thing I asked the other guy. How is an open-weights model that I can run on my own hardware, without an internet connection, going to exfiltrate my trade secrets to the Chinese government?
It's the same user and they already answered you: "If you read I’m talking about their service only models."
But yes, this is a non sequitur. The original question was: "What competitive advantage do OpenAI/Anthropic have when companies like Qwen/Minimax/etc. are open-sourcing models that show similar (yet below OpenAI/Anthropic) benchmark results?"
Even if you don't trust Chinese companies, and you want a hosted model, you can always pay a third party to host a Chinese open weight model. And it'll be a lot cheaper than OpenAI.
Chinese companies are built on IP theft, and Anthropic/Open AI are not?
And in a world where code generation costs are trending to zero, good luck commanding a premium to produce any kind of code.
There is a whole bunch of P99 code that is open-source. What makes code P99 is not the model that produces it, but the people who verify/validate/direct it.
> For coding, quality at the margin is often crucial, even at a premium
For some problems, sure, and when you are stuck, throwing tokens at Opus is worthwhile.
On the other hand, a $10/month minimax 2.7 coding subscription that literally never runs out of tokens will happily perform most day-to-day coding tasks.
"Literally never runs out of tokens?" lol, no. Tokens are just energy. There is always a way to run out of tokens, and no one will subsidize free tokens forever.
"Never runs out of tokens" in the sense that running 8 hours a day 7 days a week is still under the subscription limit
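In rough numbers, that usage pattern is easy to sketch (the 20 tokens/second decode rate below is an illustrative assumption; the 8 hours/day, 7 days/week pattern is from the comment above):

```python
# Back-of-the-envelope: weekly token volume at a steady decode rate.
# 20 tok/s is an assumed illustrative rate, not a measured one.
TOKENS_PER_SECOND = 20
HOURS_PER_DAY = 8
DAYS_PER_WEEK = 7

tokens_per_week = TOKENS_PER_SECOND * 3600 * HOURS_PER_DAY * DAYS_PER_WEEK
print(f"{tokens_per_week:,} tokens/week")  # 4,032,000 tokens/week
```

So "unlimited" here means a budget on the order of a few million output tokens per week at single-stream local-style speeds.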
if you run it at home then the sun is a pretty good way to get "free energy."
Why pay for two subscriptions though?
Claude also has other models which use less tokens.
Given the very limited experience I have where I've been trying out a few different models, the quality of the context I can build seems to be much more of an issue than the model itself.
If I build a super high quality context for something I'm really good at, I can get great results. If I'm trying to learn something new and have it help me, it's very hit and miss. I can see where the frontier models would be useful for the latter, but they don't seem to make as much difference for the former, at least in my experience.
The biggest issue I have is that if I don't know a topic, my inquiries seem to poison the context. For some reason, my questions are treated like fact. I've also seen the same behavior with Claude getting information from the web. Specifically, I had it take a question about a possible workaround from a bug report and present it as a de-facto solution to my problem. I'm talking disconnect a remote site from the internet levels of wrong.
From what I've seen, I think the future value is in context engineering. I think the value is going to come from systems and tools that let experts "train" a context, which is really just a search problem IMO, and a marketplace or standard for sharing that context building knowledge.
The cynic in me thinks that things like cornering the RAM market are more about depriving everyone else than needing the resources. Whoever usurps the most high quality context from those P99 engineers is going to have a better product because they have better inputs. They don't want to let anyone catch up because the whole thing has properties similar to network effects. The "best" model, even if it's really just the best tooling and context engineering, is going to attract the best users which will improve the model.
It makes me wonder if the self-reinforced learning is really just context theft.
Not sure how your last point matters if a 27B model can run on consumer hardware, besides being hosted by any company the user could certainly trust more than Anthropic.
OpenAI & Anthropic are just lying to everyone right now because if they can't raise enough money they are dead. Intelligence is a commodity, the semiconductor supply chain is not.
The challenge is token speed. I did some local coding yesterday with qwen3.6 35b, and getting 10-40 tokens per second means the wall time is much longer. 20 tokens per second is a bit over a thousand tokens per minute, which is slower than the experience you get with Claude Code or the Opus models.
Slower and worse is still useful, but not as good in two important dimensions.
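To make the wall-time gap concrete, a small sketch (the token rates are from the comment above; the 10,000-token session size is an illustrative assumption):

```python
def wall_time_minutes(total_tokens: int, tokens_per_second: float) -> float:
    """Minutes needed to generate `total_tokens` at a steady decode rate."""
    return total_tokens / tokens_per_second / 60.0

# Illustrative: a 10,000-token generation at the local rates quoted above.
for tps in (10, 20, 40):
    print(f"{tps:>2} tok/s -> {wall_time_minutes(10_000, tps):.1f} min")
# 10 tok/s -> 16.7 min, 20 tok/s -> 8.3 min, 40 tok/s -> 4.2 min
```

A 2-4x difference in decode speed turns a coffee-break wait into most of an hour over a few such sessions.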
Also benchmark measures are not empirical experience measures and are well gamed. As other commenters have said the actual observed behavior is inferior, so it’s not just speed.
It’s ludicrous to believe a small-parameter-count model will outperform a well-made high-parameter-count model. That’s just magical thinking. We’ve not empirically observed any flattening of the scaling laws, and there’s no reason to believe the scrappy and smart Qwen team has discovered P=NP, FTL, or a magical non-linear parameter-count-scaling model.
Ooh, car analogy time!
It's kinda like saying a car with a 6L engine will always outperform a car with a 2L engine. There are so many different engineering tradeoffs, so many different things to optimize for, so many different metrics for "performance", that while it's broadly true, it doesn't mean you'll always prefer the 6L car. Maybe you care about running costs! Maybe you'd rather own a smaller car than rent a bigger one. Maybe the 2L car is just better engineered. Maybe you work in food delivery in a dense city and what you actually need is a 50cc moped, because agility and latency are more important than performance at the margins.
And if you're the only game in town, and you only sell 6L behemoths, and some upstart comes along and starts selling nippy little 2L utility vehicles (or worse - giving them away!) you should absolutely be worried about your lunch. Note that this literally happened to the US car industry when Japanese imports started becoming popular in the 80s...
This is just blind belief. The model discussed in this topic already outperforms “well made” frontier LLMs of 12-18 months ago. If what you wrote is true, that wouldn’t have been possible.
It's amazing that we can run models better than state of the art ~36 months ago on local consumer devices!
> This is why the marginal comp difference between your median engineer and your P99 engineer is substantial, while the marginal comp difference between your median pick-and-packer and your P99 pick-and-packer isn’t.
That's an interesting analogy.
>but if you’re not a Chinese shop, would you really host your production code development workload on a Chinese hosted provider?
The point of open source models is that you host them locally. I trust neither Chinese nor American providers with this.
Are you claiming that major Chinese cloud providers like Tencent and Alibaba are pilfering trade secrets from their customers' data? To my knowledge, there's no evidence for that whatsoever. If it were true and came out, it would instantly tank their cloud businesses (which is why they don't do it, and why AWS, Azure, etc. also don't do it).
If it were to happen, Chinese law does offer recourse, including to foreign firms. It's not as if China doesn't have IP law. It has actually made a major effort over the last 10+ years to set up specialized courts just to deal with IP disputes, and I think foreign firms have a fairly good track record of winning cases.
> No one really believes at face value
This says a lot more about the prejudices and stereotypes in the West about China than it does about China itself.
In every one of these threads for a new Chinese open weights model, it's always the same tired discussion of how this is all actually a psyop by the Chinese government to undermine US interests and it can't answer questions about Tiananmen Square.
Meanwhile I'm over here solving real world business problems with a model that I can securely run on-prem and not pay out the nose for cloud GPU inference. And then after work I use that same model to power my personal experiments and hobby projects.
There are no Chinese labs with different financial and political motivations, there's only "China" the monolith. The last thread for Qwen's new hosted model was full of folks talking about how "China" is no longer releasing open weights models, when the next day Moonshot AI releases Kimi 2.6. A few days later and here's Qwen again with another open release.
For some reason this country gets what I assume are otherwise smart Americans to just completely shut off their brains and start repeating rhetoric.
> but if you’re not a Chinese shop, would you really host your production code development workload on a Chinese hosted provider?
As opposed to a US shop? Yup, sure, why not? It's the same ballpark.
> For coding, quality at the margin is often crucial, even at a premium.
For coding, quality is not measurable and is based entirely on feels (er, sorry, "vibes").
Employers paying for SOTA models is nothing but a lifestyle status perk for employees, like ping-pong tables or fancy lunch snacks.
I’m building my own company, and I consider model choice crucial to my marginal ability to produce a higher-quality product I don’t regret having built. Every higher-end dev shop I’ve worked at over the last few years perceives things the same way. There are measurable outcomes from software built well versus software built poorly, even if the code itself isn’t easily measured. I would rather pay a few thousand more per year for a better overall outcome, with less developer struggle against bad model decisions, than end up with an inferior end product and have expensive developers spinning their wheels contending with a dumb-as-a-brick model. But everyone’s career experiences are different, and I’d feel sad to work at a place where SOTA is a lifestyle choice rather than a rational engineering and business choice.
"Rational engineering and business choice" and "AI" are two phrases that do not go together.
Wait five years and come back. Right now AI is 100% FOMO and lifestyle signaling and nothing more.
"based entirely on feels"
Now there's a word I haven't heard in a long, long time.