This. I signed up for 5x max for a month to push it and instead it pushed back. I cancelled my subscription. It either half-assed the implementation or began parroting back “You’re right!” instead of doing what it’s asked to do. On one occasion it flat out said it couldn’t complete the task even though I had MCP and skills setup to help it, it still refused. Not a safety check but a “I’m unable to figure out what to do” kind of way.
Claude has no such limitations apart from their actual limits…
I have a funny/annoying thing with Claude Desktop where i ask it to write a summary of a spec discussion to a file and it goes ”I don’t have the tools to do that, I am Claude.ai, a web service” or something such. So now I start every session with ”You are Claude Desktop”. I would have thought it knew that. :)
I've had to tell it "yes you can" in response to it saying it can't do something, and then it's able to do the thing. What a weird future we live in!
Seems like the "geniuses" at Anthropic forgot to adapt the system prompt for the actual product
With one paragraph in your agents.md it's fixed, just admonish it to be proactive, decisive, and persistent.
If only…
I literally had to write a wake up routine.
https://github.com/gabereiser/morning-routine
It's always changing, but this is the start of my default prompt:
https://gist.github.com/natew/fce2b38216edfb509f7e2807dec1b6...
I've had 0 issues with Codex once it adopted it. I use it for Claude too, which seems to also improve its continuation.
It was revised for friendliness based on the Anthropic paper recently, I'd have been a lot less flowery otherwise.