Cursor, imo, is still one of the only real players in the space. I don't like the Claude Code style of coding; I feel too disconnected. Cursor is the right balance for me, and it's generally pretty darn quick; I only expect it to get quicker. I hope more players pop up in this space.
Wild to me. When I switched from Cursor to Claude Code, it only took me a day to realise that, as things stand, I would never use Cursor again.
We will all have different experiences and workflows, but I am not sure why it's wild. For myself, I find tools like Claude Code or Codex have a place, but it's not me using the tool interactively. They are both too slow in the feedback loop and so verbose that it's hard, at least for me, to establish a good cadence for writing code.
I am also surprised by this, especially because you can just run Claude Code from anywhere: Cursor, VS Code, Zed, emacs, vim, JetBrains, etc.
Cursor CLI, Codex, and Gemini work too, but they lag slightly behind in a variety of ways that matter.
And if you think you're getting better visual feedback through Cursor, it's likely you're just not using Claude Code with your IDE correctly.
I mean, the problem with Claude Code is you have to use Claude.
The model most people have been recommending you use in Cursor for the last year or so… Which model do you find significantly better?
I've been using GPT-5, and hoping to use gpt-5-codex soon
Have you tried https://zed.dev ?
Yea, and for some reason it was not my cup of tea. I think partly because their paid version feels like an afterthought.
How is the pricing? I see it says "500 prompts a month" and only Claude. Cursor is built around token usage and spreads it across multiple models when you hit limits on one, which turns out to be pretty economical.
Zed supports BYOK so you can connect it to GitHub Copilot or Anthropic or OpenRouter or whatever. The 500-per-month limit is only for Zed-hosted prompts. I personally switch between Zed-hosted, GH Copilot, Gemini API, and Ollama. Zed's AI integration isn't quite as "it just works" as Cursor's, but it's still very good and it gives you much more freedom.
You don't have to use the built-in subscription. They support curated providers (OpenAI, Anthropic, GitHub Copilot, OpenRouter, etc.), any OpenAI-compatible API, and agents like Gemini CLI and Claude Code in the AI panel.
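For reference, BYOK in Zed is just a settings.json entry. Here's a minimal sketch of pointing it at an OpenAI-compatible endpoint, assuming a recent Zed with the `openai_compatible` provider key; the provider name, URL, and model ID below are placeholders, and the exact schema may differ between Zed versions:

```json
// ~/.config/zed/settings.json (Zed's settings file permits comments)
{
  "language_models": {
    "openai_compatible": {
      "my-provider": {                        // placeholder provider name
        "api_url": "https://api.example.com/v1",
        "available_models": [
          {
            "name": "example-model",          // placeholder model ID
            "max_tokens": 128000              // context window for this model
          }
        ]
      }
    }
  }
}
```

The API key itself is typically entered through the agent panel's provider settings (or an environment variable) rather than stored in this file.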
You're going to have to get used to feeling disconnected if you want to stay in the game; that's the direction this is heading (and fast). You need to move up the ladder.
Also, Cursor is both overpriced and very mediocre as an agent. Codex is the way to go.
That is just a personal opinion, not a fact. Either option can be faster or more productive if it suits your personal coding style. I work with both, and I also favor one. But money is not exactly an issue.
I get that you have a financial incentive to say that, but at least back it up. I do believe AI tooling is here now and a worthwhile endeavor, but in my view we have not settled on best practices yet, and it depends on individual preferences right now.
Tools are for us to figure out what works and what does not. Saying "be prepared to be disconnected" sounds like slop from someone being forced into someone else's idea.
If someone has a great workflow using a tool like Codex, that's great, but it does not mean it has to work for me. I love using Codex for code reviews, testing, and other changes that are independent of each other, like bugs. I don't like using it for feature work; I have spent years building software, and I am not going to twiddle my thumbs waiting on Codex for something I am building in real time. Now, I think there is an argument that if you have the perfect blueprint of what to build, you could leverage a tool like Codex, but I am often not in that position.
AI coding tools right now are rudimentary, yet when used properly they can already massively increase velocity and enable new capabilities. This isn't random boosterism; it's based on pushing myself towards 100% AI-generated code over the last year, and working to improve my throughput and reduce the error rate of my generated code. The AI coding tools industry is being led by a 23-year-old with no software engineering or AI experience; that should tell you something about the hype-vs-rigor tradeoff being made.
Once we collectively start actually engineering AI coding systems rather than trying to surf on vibes, their power will become much more apparent to the naysayers who haven't invested the time in the processes and tools.
As for backing it up: if you hop on my company GitHub, you can check out the spark graphs for my projects, and feel free to poke around the code I've spent time tightening to see that it's not slop (have fun with the async SIMD Rust in valknut/scribe). Keep in mind I have large private projects (>200k) that are active as well. I've probably delivered 400k LoC in the last month with >60% coverage on large codebases, 99.99% AI generated.
What are you even saying? That was my whole point when you told me I need to be prepared to feel disconnected; you literally repeated what I said with a different narrative. Again, your original statement is pure opinion, and I don't think anyone knows where we ultimately land. For me, I don't like Claude Code or Codex for feature work I am actively working on.