They lost me at Opus 4.7

Anecdotally, OpenAI is trying tooth and nail to get into our enterprise and has offered unlimited tokens until summer.

Gave GPT5.4 a try because of this, and honestly I don’t know if we are getting some extra treatment, but running it at extra high effort for the last 30 days I’ve barely seen it make any mistakes.

At some points even the reasoning traces brought a smile to my face, as it preemptively handled things I had forgotten to instruct it about but that were critical to getting a specific part of our data integrity 100% correct.

Opus 4.7 via Claude Code has been inconsistent for me. Sometimes it feels like working with a brilliant collaborator and is as good as 4.5 and 4.6 were. Other times it takes dumb and lazy shortcuts. It can be quite frustrating. Its response when I tell it it did something wrong is often to write a memory... which it then does not always read. The inconsistency isn't due to session length or age either; these are all new sessions. I feel like sometimes I get routed to a dumber model, or some other hidden setting is applied.

My experience as well. This is even worse than just having a mediocre model, because I can work around that. The inconsistency means it produces different outputs for the same prompt, and I can't rely on that as a business tool.

Same here. I feel like all of these shenanigans could be because Anthropic are compute constrained, forcing them to take reckless risks to reduce it.

I started using Claude heavily on the 20th after not having used it for a year. Largely Sonnet 4.6, across web, cowork and code. I can confidently say it is significantly worse than it was this time a year ago, and I regret that my new employer requires we use it, and only it.

Same here. I was a fervent Claude Code user at $200/mo until Opus 4.7.

Freezing your IDE version is now a thing of the past; the new reality is that we can't expect agentic dev workflows to be consistent, and I see too many people (including myself) getting burned by going the single-provider route.

On one hand I’m glad to finally see Anthropic communicate on this, but at this point all I have to say is… time to diversify?

They lost me a little before then. Claude Code's regressions were so very obvious, and there's no sign, in this article or in the HN comments from those who work on Claude Code, that they've learned their lesson. They'll continue to tweak and generally mess around with a product people are using, altering its behaviour without notice, in ways that can severely impact use, for months! GPT5.4 has been remarkably consistent and capable as a replacement. I've cancelled my max plan.

GPT-5.4 was already better than Opus 4.6 in a lot of areas, especially correctness and tricky logic. I’m eager to see if 5.5 is even better.

I’ve never been one to complain about new models, and I also didn’t experience most of the issues folks were citing about Claude Code over the last couple of months. I’ve been using it since release and have been happy with almost every new update.

Until Opus 4.7 - this is the first time I rolled back to a previous model.

Personality-wise it’s the worst of AI: “it’s not x, it’s y”, punchy short sentences, a generally bullshitty vibe, and it gaslights me that it fixed something even though it didn’t actually check.

I’m not sure what’s up, maybe it’s tuned for harnesses like Claude Design (which is great btw) where there’s an independent judge to check it, but for now, Opus 4.6 it is.

I noticed the difference, but coming from Gemini and xAI models it wasn’t that glaring. I still find that Opus makes much better plans than anything else I’ve tried, and it’s been very good at catching my mistakes in using public-key cryptography, as well as figuring out why my crsqlite queries were failing despite there being no official documentation on the topic.

I’d never use such an expensive model for coding, so that might explain why I have little to complain about.

Extra high burns tokens, I find. I run 5.4 on medium for 90% of tasks, and high if I see medium struggling; it's very focused and makes minimal changes.

Yeah, but extra high also strikes the perfect balance between being meticulous and pragmatic. It also pushes back much more often than other models in that mode.

Note that mini-high has similar perf/latency to medium, but is much cheaper.

Rework burns tokens.

Not a problem if they're offering unlimited, lol

I went back to 4.5. No regrets and it’s a bit cheaper.

Same here. 4.6 was a downgrade in thinking quality, but I appreciated the extended context at first.

Over time, I realized the extended context became randomly unreliable. That was worse to me than having to compact and knowing where I was picking up.

What's your workflow like? I'd be curious to test OpenAI out again but Claude Code is how I use the models. Does it require relearning another workflow?

Isn’t it basically the same thing? You type what you want into the input box and it does what you ask for.

I guess I'm asking if their CLI tool is the same or if it functions differently. I've never used anything besides CC, so I wouldn't know if it's basically the same thing.

Claude Code can be configured with custom /slash commands and other details that don't necessarily transfer over to Codex. /remote-control in CC is really great for walking away from my computer and continuing from my phone, for instance.
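For anyone who hasn't set them up: a custom slash command is just a markdown file under .claude/commands/ whose contents become the prompt when you invoke it. The file name and text below are only an illustrative sketch, not an official example:

    # .claude/commands/check-integrity.md
    Review the current diff for anything that could break data integrity.
    List each risk, then propose the smallest fix for each one.

Typing /check-integrity (older versions expose it as /project:check-integrity) runs that prompt in the current session, and that's exactly the kind of setup that doesn't carry over when you switch to another provider's CLI.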

[deleted]

I find that it is better at thinking broadly and at a high level, on tasks that are tangential to coding like UX flows, product management and planning of complex implementations. I have yet to see it perform better than either Opus 4.6 or 4.7 though.

Truth