> I'm either in a minority or a silent majority. Claude Code surpasses all my expectations.
I looked at some stats yesterday and was surprised to learn Cursor AI now writes 97% of my code at work. Mostly through cloud agents (watching it work is too distracting for me).
My approach is very simple: Just Talk To It
People way overthink this stuff. It works pretty well. Sharing .md files and hyperfocusing on various orchestrations and prompt hacks of the week feels about as interesting as going deep on vim shortcuts and IDE skins.
Just ask for what you want, be clear, give good feedback. That’s it
Right - I have a ton of coworkers who obsess over "skills" and different ways to run agents and whatnot but I just... spend some time to give very thorough, detailed instructions and it just Does The Thing. I rarely fight with Claude Code these days.
We probably need something like the WET ("write everything twice") principle for skills. If you need to explain the same thing to an agent more than twice, turn it into a skill (or add it to AGENTS.md, or CLAUDE.md, or to your docs folder, or your guides folder, or whatever method you use). If you haven't needed to explain it more than twice, it's probably fine. The context pollution from the skill would likely be worse than not having the skill.
Of course exceptions apply. Some basic information that will reliably be discovered is still worth adding to your AGENTS.md to cut down on token use. But after a couple obvious things you quickly get into the realm of premature optimization (unless you actually measure the effects)
Same here. For me, this means a spec doc split into features/UX, technical requirements, and language-specific requirements, iterated before the model touches code.
The trick is to "just use it", BUT every few weeks grab the logs (you do keep them, right?) and have a session with the model to find out if there are any repeated patterns.
If you find any, consider making them into skills or /commands or maybe even add them to AGENTS.md.
Which logs do you use for that?
I would assume those in ~/.claude/projects/**/*.jsonl. They contain full conversation history, including the tool calls that were made, how many tokens were consumed, etc.
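A short script can do a first pass over those session logs before handing them to a model. This is a minimal sketch: the exact JSONL event schema is an assumption (the `type` and `name` keys here are illustrative), so adjust the key names to whatever you actually see in your log files.

```python
import json
from collections import Counter
from pathlib import Path


def tally_tool_calls(log_dir: str) -> Counter:
    """Count tool-call names across all *.jsonl session logs under log_dir.

    Assumed event shape (verify against your own logs):
        {"type": "tool_use", "name": "Bash", ...}
    """
    counts: Counter = Counter()
    for path in Path(log_dir).glob("**/*.jsonl"):
        for line in path.read_text().splitlines():
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip truncated or non-JSON lines
            if event.get("type") == "tool_use":
                counts[event.get("name", "unknown")] += 1
    return counts
```

Calling `tally_tool_calls("~/.claude/projects")` (with the path expanded) gives a rough frequency table; anything that dominates the counts is a candidate for a skill or an AGENTS.md note, per the rule-of-two above.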
Claude has a built-in /insights feature for this, but you can replicate it with any other tool that keeps the session logs on disk.
I agree it works nicely for me. From my experience it's not realistic to expect a one-shot each time. But asking it to build chunks and entering a review cycle with nudging works well. Once I changed my mindset from "it didn't do a one-shot so it's crap" and took it as an iterative tool that builds pieces that I assemble, it's been working nicely without external frameworks or anything. Plan, review, iterate, split, build, review, iterate.
You're wasting a ton of tokens doing that though. Right now you don't realize it because they're being heavily subsidized, but you will understand the point of having good orchestration and memory files when you have to pay the real cost of your use.
> You're wasting a ton of tokens doing that though.
My time is worth more than tokens. I’m thinking of maybe creating some .md files to save me time in code review. If I do it right, it’s going to cost more in tokens because the robots will do more.
Cost cannot go up, only down over time (with occasional short-term fluctuations). Competition, including open-weight models and consumer hardware (i.e. the upcoming M5 Ultra), keeps pushing down the ceiling of what you can charge.
If the cost is subsidized by another cash source (e.g. VC money) when the source stops prices can definitely go up.
Company pays for company’s tokens, so company’s problem, not mine. I am happy to skill up and avoid overusing tokens for my personal sub, but if it’s getting results then I couldn’t care less how much my employer has to pay for it. They’re begging me to use it in the first place anyway.
My experience as well on non-trivial stuff for personal projects: just talk. It makes mistakes, but considering the code I see in professional settings, I'd rather deal with an agent than with third parties.
I love the IDE skins analogy. Very true.
Everyone knows that a red UI skin goes faster
How do you collect these stats?
Is it by characters human typed vs AI generated, or by commit or something?
> How do you collect these stats?
Cursor dashboard. I know they're incentivized to over-estimate but feels directionally accurate when I look at recent PRs.
Are you mostly using the Composer model?
> Are you mostly using the Composer model?
Don’t really think about it. I think when I talk to it through Slack, Cursor uses Codex; in my IDE it looks like it's whatever the highest Claude model is. In GitHub comments, who even knows.