I don't know, it seems like I'm in the minority, but I like Cursor. I think it adds value beyond the terminal-style editors. Yes, it relies on the Claude model, but I get a lot of value from the visual component, history, auto-complete, etc.
Couldn't you make the same argument about something like S3? How many companies are basically S3 wrappers? Or companies that use general AWS infra and make it slightly better? There could still be a market for add-on products. Why would Claude or OpenAI want the headache of managing an IDE? They're okay giving up some margin there.
I agree there is a huge rush of "AI wrapper" companies whose moat is basically prompt engineering, like an "AI buddy" or whatever. Those are all going to zero, IMO. But things like Cursor have a future. Maybe not at the hyped valuation, but long term something like this will exist.
I’d love for someone to try to define “AI wrapper”.
I’m trying to imagine a graph where at some point in time t
the status of a company changes from “wrapper” (not enough “original” engineering)
to “proper company” (they own the IP, and they fought for it!!!)
At what point did OpenAI cease being an NVIDIA wrapper and become the world’s leading AI lab? At what point did NVIDIA graduate from being a TSMC wrapper?
Clearly any company that gets TSMC N2 node allocation is going to win, the actual details of the chip don’t matter super much.
'Wrapper' in this instance means your primary source of value is a prompt.
I think you can think of it as how long it would take someone to come up with the product, given enough information about the product.
Take, for instance, an app that is a "companion" app. Its simplest form is a prompt + LLM + interface. They don't own the LLM, so they have the prompt and the interface. The prompt is simple enough to figure out (often by asking the app in a clever way), so the interface is what's left. How easy is it to replicate? If it's like ChatGPT, pretty easy.
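To make that concrete, here's a minimal sketch of the "prompt + LLM + interface" shape. Everything here is hypothetical illustration — `call_llm` stands in for whatever chat-completion API the app rents:

```python
# Minimal "AI companion" wrapper sketch: the whole product is a
# system prompt plus a thin layer around someone else's model.
# call_llm is a hypothetical stand-in for any chat-completion API.

SYSTEM_PROMPT = "You are a supportive companion. Be warm and brief."

def build_messages(user_input: str) -> list[dict]:
    """The 'prompt engineering' layer: the only IP the wrapper owns."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

def companion_reply(user_input: str, call_llm) -> str:
    # The interface layer: one function call into the rented model.
    return call_llm(build_messages(user_input))
```

The moat is one string and one function call, which is exactly why this kind of product is a weekend to replicate.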
Now there are a few complications. Suppose there are network effects (Instagram is a wrapper around a protocol), and the network effects are the value. An LLM wrapper can create network effects too (maybe there is a way to share or something), but it's difficult.
OpenAI is not a wrapper on NVIDIA because it would take billions of dollars (in energy) to train the LLM with the NVIDIA chips. It would take me a weekend to recreate a GPT wrapper, or I could just fork an open-source implementation. There is also institutional knowledge (which is why Meta is offering $1bn+ for a single engineer). Or take something like Excel: people know how it works, people have dissected it endlessly, but the cost to recreate it even with perfect knowledge is very high, plus there are network effects.
I guess the point I was trying to make is that this:
Is a fast-moving target, and the time gets shorter as more money and knowledge get involved. Put another way: that $1b for talent or chips is a fast-depreciating asset.

Taking a shot at this, one concrete definition might be that the business model is essentially white labeling; that is, the base LLM is rebranded, but task performance in the problem domain is not functionally improved in some measurable way. As a corollary, it means the user could receive the same value if they had gone straight to the base LLM provider.
I think this might be more narrow than most uses of the term “wrapper” though.
Every company you mentioned is just a wrapper around elemental carbon and silicon.
Claude Code with the current IDE integration is already very good. The only thing missing is the completion that Cursor is pretty good at.
For me VScode with Github Copilot + Claude Code hits the sweet spot
Copilot sucks. Supermaven makes it look like a joke.
I'm super excited to try the Cursor CLI https://cursor.com/cli
Claude Code has always been unparalleled. It's almost as if other AI CLI devs have no idea what they're doing.
It's not really the same, because the provider in this case isn't necessarily shipping a traditional service; they're shipping intelligence. We've mistaken APIs for the end-state for providers. Providers are going to eat every abstraction along the way in their delivery of intelligent capabilities. Claude Code is just the start: a true agentic capability that shifts a paradigm for ways of working. It will evolve into Claude Agent for general-purpose digital work.
There's a lot of talk around economics. What is going to be more economical than a provider building abstractions/margin optimizations around the tokens and shipping directly to the consumer, versus token arbitrage?
Lastly, there's a lot of industry hype and narrative around agents. In my opinion, Claude Code is really the only effective, actual agent; the first-born. It shows that Anthropic is signaling that the leading providers will no longer just train models. They are creating intelligent capabilities in the post-training phases / in RL. They are shipping the brain and the mech suit for it. Hence: eat the stack. From terminal to desktop, to eventual robotics.
> Providers are going to eat every abstraction along the way in their delivery of intelligent capabilities. [...] There's a lot of talk around economics. What is going to be more economical than a provider building abstractions/margin optimizations around the tokens and shipping directly to the consumer, versus token arbitrage?
The steelman counter-argument would be that specialized interfaces to AI will always require substantial amounts of work to create and maintain.
If true, then similar to Microsoft, it might make more financial sense for Anthropic et al. to cede those specialized markets to others, focus on their core platform product, take a cut from many different specialized products, and end up making more as the addressable market broadens.
The major AI model providers substantially investing in specialized interfaces would suggest they're pessimistic about revolutionary core model improvements and are thus turning to vertical integration to preserve margin/moat.
But relatively speaking, it doesn't seem like interfaces are being inordinately invested in, and coding seems such an obvious agentic target (and dogfoodable learning opportunity!) that it shouldn't prompt tea leaf reading.
> It shows that Anthropic is signaling that the leading providers will no longer just train models.
I think it instead (or also?) shows a related but orthogonal signal: that the ability and resources to train models are a strong competitive advantage. This is most obvious with deep research: I haven't seen any wrapper or open-source project achieve anywhere near the same quality as Gemini/Claude deep research, and Claude Code is a close runner-up.
Have you tried the Claude Code VS Code plugin? It has almost everything Cursor has to offer.
I've used Claude Code but not the VS Code plugin. I get enough value from the auto-complete that I'll use Cursor regardless, though I don't think it's worth $20 for that alone.
But now it's subsidized so I easily spend over $50 of Claude credits for my $20 in Cursor.
Also, the ability to swap out models is a big value add, and I don't have to worry about the latest and greatest. I switch seamlessly. Something comes out, and the next day it's in Cursor. So now I'm using GPT, which is less than half the price. I don't want to have to think about it or constantly consider other options. I want a standardized interface and to plug in whatever intelligence I want. Kind of like Dropbox, which can worry about whether they store in AWS, Azure, or GCP depending on which one is the best value.
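That "standardized interface, plug in whatever intelligence" idea is basically the strategy pattern. A toy sketch of it, with made-up backend names (the real thing would wrap actual provider APIs):

```python
# Provider-agnostic model interface (hypothetical names): the tool
# codes against one Model protocol and swaps backends the day a new
# model ships, like Dropbox swapping storage clouds underneath.
from typing import Protocol

class Model(Protocol):
    def complete(self, prompt: str) -> str: ...

class ClaudeBackend:
    def complete(self, prompt: str) -> str:
        return f"[claude] {prompt}"  # placeholder for a real API call

class GPTBackend:
    def complete(self, prompt: str) -> str:
        return f"[gpt] {prompt}"  # placeholder for a real API call

def answer(model: Model, prompt: str) -> str:
    # The caller never knows which provider is underneath.
    return model.complete(prompt)
```

Switching providers is then a one-line change at the call site, which is exactly the "don't make me think about it" value Cursor sells.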