The moat is in:
1. Opus and Sonnet.
2. Compute capacity. Anthropic has much more of it than your average coding startup.
3. The developing ecosystem around Claude Code.
I don’t think Opus and Sonnet are significantly better than Gemini or ChatGPT. Am I missing something?
It looks to me like Anthropic is one or two Gemmas away from a lot of people using Opus for the 20% of hard use cases and letting an on-device LLM rip through the codebase on a Mac Mini or Studio with OpenCode.
Once Claude Code is not the only game in town and Cowork is made redundant by Google pulling their finger out on integration with Workspace, what else is there for Anthropic?
On-device agentic use is orders of magnitude harder than simple chatting (which is still slow for SOTA), it uses up a huge amount of context and tokens on reading code and reasoning through it. It's sort of viable if you just set it to work overnight on some completely vibe-coded stuff, but that has very middling results. Giving feedback to the model interactively is completely out of the question.
Where open models can make a difference for agentic use is with third-party inference at scale, which can actually be fast enough for reasonable workflows.
none of the three is even remotely a moat
How so? Opus and Sonnet are frontier models which cannot easily be replicated. Compute has real physical constraints which require appropriate procurement at this scale. At least those two points seem like pretty strong moats against the majority of companies.
You don't need to "replicate" Opus and Sonnet, you just need to match their overall performance at lower cost. That's been absolutely doable so far, with a steadily decreasing lag time.
You're right and your reasoning is great. Anthropic should fold and give up the $30 billion ARR just announced in the OP. Shut it all down, no moat here.
/s