I have found Cursor to be frustrating and exhausting to work with, even with my rules file. When it works, it’s like magic. But most of the time, it feels like working with a Jr. dev who has a bit of a concussion. Code review is wearying work, and using Cursor means you’re doing a lot of code review. I have never once gotten into a flow state with it.
That was a long preamble to this question: any senior devs out there (20+ years) who enjoy using Cursor? What’s the trick?
If I may refer to a Zen koan, "your teacup is full."
I started programming professionally around Java 1.3, and the greybeards at the time were talking about how they had moved to OOP. Most disliked it, but the people who didn't adapt got stuck in dead ends. (Stay in a dead end long enough and it becomes lucrative - see Oracle DBAs and COBOL developers!)
You absolutely have to treat coding with LLMs as a new skill. It's just like learning a new editor to the same precision that you know Emacs or Vim. It's learning the rough edges - and the rough edges keep changing. As a senior, it's frustrating! However, as I learn more, I pick up the concepts I need to use what's good, and I come back to the frustrating parts in a month or two, when they've gotten better and so have I.
I've spent a career reading others' code, so that doesn't bother me.
Now, I prompt through TDD for code that matters and that has helped stop many of the problems I see others face. It's a little slower as an iteration loop but it generates the test data well, it allows me to make sure I understand the problem and the prompt, and it forces the changes to be small, which increases success.
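Concretely, an iteration usually starts with a tiny test like this (a sketch with made-up names, assuming vitest), and only then do I prompt for the implementation:

    // slugify.test.ts - written (or prompted) first, before slugify exists
    import { describe, expect, it } from "vitest";
    import { slugify } from "./slugify";

    describe("slugify", () => {
      it("lowercases and replaces spaces with dashes", () => {
        expect(slugify("Hello World")).toBe("hello-world");
      });

      it("collapses repeated separators", () => {
        expect(slugify("a  --  b")).toBe("a-b");
      });
    });

Keeping the prompt scoped to making that one test pass is what keeps the diffs small.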
If I have to make a non-trivial change to the code, I know the agent will try to rewrite the whole thing if I don't start a new session. So I'm liberal about creating new sessions.
It is much better at small problems than large ones, which keeps me mindful to break my problems into small pieces. Not only is that good for the LLM, it's good for the codebase.
In some cases, it's better to have the LLM write a script to make the change than to have it walk through the code with prompts. It's very good at simple scripts - see the sketch below.
And I'm not afraid to discard it when it's not valuable at the moment.
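To make the script idea concrete, here's roughly the kind of throwaway codemod I mean (hypothetical rename, plain Node, nothing project-specific):

    // rename.ts - throwaway codemod: rename oldHelper() calls to newHelper()
    import * as fs from "fs";
    import * as path from "path";

    // Collect all .ts files under a directory, skipping node_modules.
    function walk(dir: string): string[] {
      return fs.readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
        const full = path.join(dir, entry.name);
        if (entry.isDirectory()) return entry.name === "node_modules" ? [] : walk(full);
        return full.endsWith(".ts") ? [full] : [];
      });
    }

    for (const file of walk("src")) {
      const before = fs.readFileSync(file, "utf8");
      const after = before.replace(/\boldHelper\(/g, "newHelper(");
      if (after !== before) {
        fs.writeFileSync(file, after);
        console.log(`updated ${file}`);
      }
    }

Run it once, check the diff, delete the script.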
I find Cursor (or any LLM in general) to be amazing at doing very simple and concrete tasks for me. It really saves me a lot of time.
But the keyword here is "simple and concrete". I do not understand why people expect those tools to be good at tasks that require years of context on something. It really makes you think that the people who say AI will replace SWEs have never actually used any of these tools.
Yes. I've loved not having to do some of the boring busywork tasks of coding. Basic CRUD pages, simple frontend stuff, general glue between existing services.
The trick I’ve been using is to copy the entire codebase into a text prompt with Repo Prompt and feed that into Grok with a specific request on what feature / change I want.
Then paste that output into Cursor with Claude 3.7 and have it make the actual code changes and ask it to build/fix errors along the way with yolo mode enabled.
The 2-step process is a lot better since Grok can refer to the entire context of your codebase in one shot and come up with a high quality implementation plan, which is then handed off to Cursor to autonomously make the code changes.
grok-3 beta in cursor has been good
Or Gemini Pro
I don't know if I'm doing something wrong, but Gemini 2.5 Pro produced substantially worse code than Grok. That's surprising, since I'm working on a Golang codebase, which I'd assumed Gemini would excel at given that it's made by Google.
Grok is quickly becoming my favorite model just because it’s so verbose, but at the same time low on BS.
Yes, Grok has become my go-to model for general research and targeted coding tasks. Feels like it's getting better over time, whereas ChatGPT seemed to deteriorate.
Claude 3.7 is excellent and better at coding, but I appreciate Grok's context size and feel like I get better bang for my buck for general-purpose research too.
Yeah. I’m finding Gemini to be very verbose. Getting better results with Claude Sonnet. But Gemini has the larger context window.
I've been coding for 20+ years but I'm not sure that I'm a senior dev necessarily. That said, I use Cursor all the time and have had a lot of success.
Mostly you have to guide it a lot with verbose planning, often chained through other LLMs. It is like working with a junior developer - no doubt about that. But that's still really good for $20/month.
Coding is not my full time job and it really hasn't been more than 20 hrs/week since my time in FAANG; I do a lot of statistical work, IT, and other stuff too. Maybe that's why I like the mercurial help of Cursor since I am not using it 50 hours/week.
This was my experience, until the latest update. Suddenly Cursor is useless. The agent option? Terrible. What's manual, what's ask? Just give me what I had before…such a step backwards.
The UI changed in the latest update but it’s not that hard.
Ask: previously was chat, and just tries to answer questions. Does not have the capability of editing your code directly (although if it provides a snippet, you can always click to apply it to the code).
Manual: previously was composer in standard mode. Can edit code across multiple files, but only works one prompt at a time. So if you ask it to edit tests, it will do that and then wait for your next input.
Agent: previously composer with agent mode enabled. Same as manual, but can figure out next steps and automatically execute them. For example, it can edit the tests, then run the CLI command to run the tests, then edit the code again if there are test failures, and repeat.
I find agent to be most helpful when you know the end goal but you need to be clear about what you want. Tell it things like “run the tests to make sure they’re working” and “search the codebase for where this class is used”.
I find manual best for when you know what small steps to do. Like, “create a helper class for managing permissions”, followed by “write tests for the profile view that checks permissions”, followed by “refactor the profile view to use the helper class”.
Which one of those options was the default for command+L? I also find it's always auto-applying changes despite those options being off…it just seems a lot less smart suddenly.
I've really only had two problems:
- Gemini: refuses to work 20-30% of the time
- Sonnet: does 50% too much work
Pretty average dev here. It's not my profession but I do use it to make some of my living, whatever sense that makes.
Cursor is magic. Two massive use-cases for me:
1. Autocompletion of a copied function.
So I have some function, modX(paramX), and I need to duplicate it to modY(paramY). They're near enough the same. I love that I can copy/paste it, rename a variable or two, and Cursor then intuits what I need. Tab tab tabbity-tab later, job done. This might not be the most amazing use of AI but it sure as shit helps my RSI.
(I know I should abstract my functions yada yada.)
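A made-up but representative example of what I mean:

    type Point = { x: number; y: number };

    // Original, hand-written:
    function modX(paramX: Point): Point {
      return { ...paramX, x: paramX.x * 2 };
    }

    // Pasted copy: I rename modX -> modY and paramX -> paramY, and the
    // tab completions rewrite the body from x to y on their own:
    function modY(paramY: Point): Point {
      return { ...paramY, y: paramY.y * 2 };
    }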
2. Generation of a new function where I have no idea how to start.
I tell the prompt what I need. "I'll give you a bunch of Markdown files and I'd like you to take the frontmatter properties and put them on an object, then take each of the headers and its following content and put that into this property on the object".
It'll do that, to 90%. I'll fix the 10%, making sure that I understand what it's created for me.
This would have taken me 4 hours. With Cursor it takes 15 minutes.
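For the curious, what it produced was roughly along these lines (a reconstruction from memory, not the exact code; it assumes the gray-matter package for the frontmatter parsing):

    import matter from "gray-matter";

    type ParsedDoc = Record<string, unknown> & { sections: Record<string, string> };

    // Frontmatter keys become top-level properties; each "## Header"
    // becomes a key in `sections` whose value is the content under it.
    function parseDoc(markdown: string): ParsedDoc {
      const { data, content } = matter(markdown);
      const sections: Record<string, string> = {};
      // split() with a capture group keeps the captured headers in the result
      const parts = content.split(/^##\s+(.+)$/m);
      for (let i = 1; i < parts.length; i += 2) {
        sections[parts[i].trim()] = (parts[i + 1] ?? "").trim();
      }
      return { ...data, sections };
    }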
Yup 30+ year dev here. Tried Cursor for a bit and honestly most of the tokens burned were fighting with stupid code generation, and having it confidently push on. Just cancelled my sub today. Been working with Windsurf the last few weeks and it feels a little more controllable. However, what I tend to do is work on conceptual design, brainstorming etc with ChatGPT (o1/o3-mini-high) or Claude, then when I converge I bring the task to Windsurf for its slightly better view of the code and ability to execute across many files.
Yes, I previously used Cursor to build my SaaS, but now I need to refactor because the codebase has become unmodifiable. With AI coding tools, you must describe your problem extremely precisely—otherwise, things quickly turn into a mess.
When people use low-traction languages and claim LLMs are a boon, I often wonder if they don't end up with an even lower-traction language by trying to figure out how to phrase things in English.
In my experience, the solution to low-traction languages and frameworks with lots of boilerplate and busywork coding is to use higher-traction languages. I much prefer to grapple with the problem with my brain in a tighter way than to attempt shaping a loose language with a looser one...
Honestly I’m having more fun with it than I expected - I kind of enjoy mentoring though, I like sort of feeding a newbie ideas and questions that leads them to make their own breakthrough, and I feel like Cursor delivers that same feeling sometimes.
Other times, I'm just jumping into a quick feature or bugfix ticket that's nothing particularly fun or interesting - so it's more fun to basically roleplay overseeing a junior dev, without any of the social pressure of interacting with a real person, and not have to get my hands dirty with boring tasks.
It’s all about finding the opportunity to have fun with it.
I feel like a lot of time is being wasted debating how serious and how real it is - but are you having any fun with it?? Cause I kinda am.