It's not really the same, because the provider in this case isn't necessarily shipping a traditional service; they're shipping intelligence. We've mistaken APIs for the end-state for providers. Providers are going to eat every abstraction along the way in their delivery of intelligent capabilities. Claude Code is just the start: a true agentic intelligent capability that shifts the paradigm for ways of working. It will evolve into Claude Agent for general-purpose digital work.

There's a lot of talk around economics. What is going to be more economical than a provider building abstractions and margin optimizations around its own tokens and shipping directly to the consumer, as opposed to token arbitrage?

Lastly, there's a lot of industry hype and narrative around agents. In my opinion, Claude Code is really the only effective, actual agent; the firstborn. It shows that Anthropic is signaling that the leading providers will no longer just train models. They are creating intelligent capabilities in the post-training phases / in RL. They are shipping the brain and the mech suit for it. Hence, eat the stack: from terminal to desktop to eventual robotics.

> Providers are going to eat every abstraction along the way in their delivery of intelligent capabilities. [...] There's a lot of talk around economics. What is going to be more economical than a provider building abstractions and margin optimizations around its own tokens and shipping directly to the consumer, as opposed to token arbitrage?

The steelman counter-argument would be that specialized interfaces to AI will always require substantial amounts of work to create and maintain.

If true, then similar to Microsoft, it might make more financial sense for Anthropic et al. to cede those specialized markets to others, focus on their core platform product, take a cut from many different specialized products, and end up making more as the addressable market broadens.

The major AI model providers substantially investing in specialized interfaces would suggest they're pessimistic about revolutionary core model improvements and are thus looking to vertical integration to preserve margin/moat.

But relatively speaking, it doesn't seem like interfaces are being inordinately invested in, and coding seems such an obvious agentic target (and dogfoodable learning opportunity!) that it shouldn't prompt tea leaf reading.

> It shows that Anthropic is signaling that the leading providers will no longer just train models.

I think it instead (or also?) shows a related but orthogonal signal: that the ability and resources to train models are a strong competitive advantage. This is most obvious with deep research; I haven't seen any wrapper or open-source project achieve anywhere near the same quality as Gemini/Claude deep research, but Claude Code is a close runner-up.