People on here love to be contrarian about Cursor, but I’ve tried all the popular alternatives (Copilot, Claude Code, Codex, Gemini CLI, Cline) and found Cursor’s overall experience to just be unmatched. A big part of that is its speed, another its reliability.

It’s the only coding agent I’m actually motivated to use out of the box, because it really does make me feel more productive, while the others keep messing up the project — everything from oversized changes I didn’t ask for to constant syntax and request errors.

It’s the only coding agent I’ve used that feels serious about being a product rather than a prototype. Their effort in improving their stack is totally paying off.

I dropped Cursor for the precise reason you mention: reliability.

Countless times my requests in the AI chat would just hang there for 30+ seconds or more before I could retry them.

When I decided to give Claude Code a try (I thought I didn't need it because I used Claude in Cursor), I couldn't believe how much faster it was — and it was literally 100% reliable.

EDIT: given today's release, I decided to give it a go. The Composer1 model _is_ fast, but on only the second new agent I started, I got this:

> Connection failed. If the problem persists, please check your internet connection or VPN

Sounds like you have a network problem. Did you try checking the network diagnostic in settings? Cursor defaults to HTTP/2, which can throw a wrench into some corporate networks.

I would be willing to bet money your issue is on your side. I've been a daily user since the beginning and can't recall having issues like you describe unless they were related to my corp network.

A lot of progress is being made on the Cursor side; I encourage you to try it again.

(Cursor dev)

This is the exact reason I left Cursor for Claude Code. Night-and-day difference in reliability. The Windows experience may be especially bad: Cursor would constantly hang or otherwise fail when trying to run commands. I also had to babysit it and tell it to continue on mid-sized tasks.

They've improved performance dramatically in the last few weeks, might have fixed your issues.

It's clear they've been shipping a lot of Windows updates.

It does seem significantly better on Windows. I'll give it another chance over the next couple weeks.

I use Cursor daily; my business partner uses CC. Without a doubt, CC is better — I'm just not willing to let go of the flow I spent the last year fine-tuning. I'll probably make the leap after we finish the latest release.

I used Cursor for a total of one day (after paying for a year's subscription), discovered Claude Code later that day, and haven't opened Cursor since.

Note: later I started using Codex, and now Codex is my daily driver, with Claude Code for the problems where Codex fails (not many). Cursor is still never used.

They were the first mover, but Codex (in my opinion) blows Cursor into 1000 tiny pieces. It's just so, so much better.

I too have tried them all and have settled on Cursor as the best. That said, I see the current space split between folks like me, who generally know what they want built and appreciate a tool that helps them reach that goal quicker, and, on the other side of the spectrum, folks who want the tool to orchestrate most of the engineering. I have no opinion on which is better, but I sit in the first camp, and in that camp Cursor is by far the best tool.

Yep, it just works seamlessly. Sure, it hangs sometimes, but their UI allows you to retry or undo changes to an earlier point in the conversation easily. The autocompletion is nice as well and pretty satisfying to tab through the small and menial things when refactoring.

> I’ve tried all the popular alternatives (Copilot, Claude Code, Codex, Gemini CLI, Cline)

Can't help but notice you haven't tried Zed!

You tried Claude and still prefer Cursor?

Absolutely. CC can be tuned to not do too much crap on its own, but even with the new extension, its IDE integration and multi-thread management are still significantly worse, as is its status reporting, which I find very important.

Also, somehow magically, I’ve found Cursor’s Auto mode to be significantly faster than the specific models I’ve tried, Claude being among them.

Auto is pretty amazing, and I think most folks who have issues or complain about cost are simply not using Auto.

Auto is only good for trivial stuff at this point. It is quite subpar at everything else. This is probably because it almost always defaults to Claude 3.5 Sonnet (which you can tell if you ask the agent to identify itself and state its version), and that model is pretty outdated.

Again, it goes back to what your workflow is. I don't think "trivial" is the right word. I use Auto to write fairly advanced code, but I do it in bite-sized (or relatively bite-sized) chunks — think function-level, or a couple of interdependent functions being written.

I would agree it is not as good at lengthy work, where it takes a design all the way through implementing a feature in a single shot, but "trivial" is not a good description.

I also don’t think you’re right: 3.5 was recently deprecated, and even before then, Cursor had been hitting rate limits with Anthropic. Auto is as much a rate-limit optimization as it is a token-cost optimization.

Auto had a big improvement a few weeks ago (around when the pricing changed).

If "a few weeks" means months, I would agree — I think the change to Auto was 2-3+ months ago, when they moved to charging for named models and raised the limits on Auto.

Absolutely. I actually don’t understand the preference folks have for Claude Code; I don’t find it that powerful. That said, I think some of it comes down to preference and work context.