Claude Code has been bleh or meh at best in my experience. There are so many posts on HN fawning over it lately that it could only be a guerrilla marketing campaign.

[deleted]

You still need to give it precise context and instructions when dealing with things that are not web apps or some other software cliche.

The reasoning in Opus is great, unbeatable at the moment.

I understand what you mean, it becomes disappointing on more niche or specific work. It’s honestly a good thing to see these models are not really intelligent yet.

I still don't trust any AI enough to generate or edit code, except for some throwaway experiments, because every time I tried it's been inefficient or too verbose or just plain wrong.

I use it for reviewing existing code, specifically for a components-based framework for Godot/GDScript at [0]. You can view the AGENTS.md and see that it's a relatively simple project: just for 2D games and fairly modular, so the AI can look at each file/class individually and has to cross-reference maybe 1-3 dependencies/dependents at most during a single pass.

I've been using Codex, and it's helped me catch a lot of bugs that would have taken a long time on my own to even notice at all. Most of my productivity and the commits from the past couple months are thanks to that.

Claude on the other hand, oh man… It just wastes my time. It's had way more gaffes than Codex, on the exact same code and prompts.

[0] https://github.com/InvadingOctopus/comedot

I had a similar experience, and the answer appears to be learning how to use a specific model for a specific task with a specific harness (model × task × harness). Another, somewhat related, lesson learned is understanding how to work with a given model and not against it.

I still get really mad at AI sometimes and I am not sure whether I could use AI for coding full time.

(Codex broke my git a few days ago.)

"I don't get it. Everyone else is wrong."

"There's no such thing as astroturfing." ok

I use Codex regularly, and Claude is shit in comparison, from its constant "Oops, you're right!!" backtracking to its crap Electron app (if their AI is so good, why can't they make a fucking native app for each OS?)

Hell, right freakin' now I asked it to implement something and got a weird "Something went wrong" API error.

"Shit", "Crap", "Fucking", "Hell", "Freaking".

Maybe you're too easily frustrated. Or your existing code reads like your comments.

Maybe you haven't tried any other AI product with an actual preexisting project. Or blindly trust every BS Claude feeds you.

I haven't had any such frustrations with Codex.

Claude is especially annoying because of their submarining and people thinking it's the best.

I use both, read what I need to read, and fix small issues myself. Both agents are pure magic, and none of their issues warrant a tantrum on a public forum.

I posted a more detailed report in case you can't see it in your thread view: https://news.ycombinator.com/item?id=47541369

...and other comments further back in my history.

> none of their issues warrant a tantrum on a public forum

I don't get frustrated if a problem is genuinely difficult to solve and the product creator is trying their best; I get frustrated when a problem has been solved by other similar products but a specific creator or provider refuses to follow suit and fix their shit.

Claude's Electron app vs. Codex's native app is one such example right off the first impression of both products.

Codex desktop is Electron too. What app are you talking about?