These releases are lacking something. Yes, they optimised for benchmarks, but it’s just not all that impressive anymore. It is time for a product, not for a marginally improved model.
The model was released less than an hour ago, and somehow you've been able to form such a strong opinion about it. Impressive!
It's more hedonic adaptation: people just aren't as impressed by incremental changes as they are by big leaps. It's the same as another thread yesterday where someone said the new MacBook with the latest processor doesn't excite them anymore. For most people, most models are good enough, and now it's all about applications.
https://news.ycombinator.com/item?id=47232453#47232735
Plus people just really like to whine on the internet
Oh, come on, if it can't run local models that compete with proprietary ones it's not good enough yet!
Qwen 3.5 small models are actually very impressive and do beat out larger proprietary models.
GP said "It is time for a product, not for a marginally improved model."
ChatGPT is still just that: Chat.
Meanwhile, Anthropic offers a desktop app with plugins that easily extend the data Claude has access to. Connect it to Confluence, Jira, and Outlook, and it'll tell you what your top priorities are for the day, or write a Powerpoint. Add Github and it can reason about your code and create a design document on Confluence.
OpenAI doesn't have a product the way Anthropic does. ChatGPT might have a great model, but it's not nearly as useful.
I am actually super impressed with Codex-5.3 extra high reasoning. It's a drop-in replacement (in fact better than Claude Opus 4.6; lately Claude has been super verbose, going in circles while getting things resolved). I've mostly stopped using Claude and am having a blast with Codex 5.3. Looking forward to 5.4 in Codex.
I still love Opus but it's just too expensive / eats usage limits.
I've found that 5.3-Codex is mostly Opus quality but cheaper for daily use.
Curious to see if 5.4 will be worth somewhat higher costs, or if I'll stick to 5.3-Codex for the same reasons.
Same, it also helps that it's way cheaper than Opus in VSCode Copilot, where OpenAI models are counted as 1x requests while Opus is 3x, for similar performance (no doubt Microsoft is subsidizing OpenAI models due to their partnership).
I've been using both Opus 4.6 and Codex 5.3 in VSCode's Copilot and while Opus is indeed 3x and Codex is 1x, that doesn't seem to matter as Opus is willing to go work in the background for like an hour for 3 credits, whereas Codex asks you whether to continue every few lines of code it changes, quickly eating way more credits than Opus. In fact Opus in Copilot is probably underpriced, as it can definitely work for an hour with just those 12 cents of cost. Which I'm not sure you get anywhere else at such a low price.
Update: I don't know why I can't reply to your reply, so I'll just update this. I have tried many times to give it a big todo list and told it to do it all. But I've never gotten it to actually work on it all and instead after the first task is complete it always asks if it should move onto the next task. In fact, I always tell it not to ask me and yet it still does. So unless I need to do very specific prompt engineering, that does not seem to work for me.
That shouldn't really make a difference because you can just prompt Codex to behave the same way, having it load a big list of todo items perhaps from a markdown file and asking it to iterate until it's finished without asking for confirmation, and that'll still cost 1x over Opus' 3x.
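The pattern described here, loading a big todo list and iterating until it's finished without asking for confirmation, can be sketched as below. This is a minimal illustration, not any agent's actual harness: it parses GitHub-style markdown checkboxes, and `do_task` is a stub standing in for handing one item to the coding agent.

```python
# Sketch of the "load a big todo list and iterate until finished" pattern.
# Parses GitHub-style markdown checkboxes and works through every unchecked
# item in one pass; do_task is a stub for dispatching one item to the agent.
import re

TODO = """\
- [ ] add input validation
- [x] write README
- [ ] fix flaky test
"""

def parse_todos(markdown: str):
    """Return (done, text) pairs for each '- [ ]' / '- [x]' line."""
    items = re.findall(r"^- \[([ x])\] (.+)$", markdown, re.MULTILINE)
    return [(mark == "x", text) for mark, text in items]

def do_task(text: str) -> bool:
    """Stub: hand one item to the agent and report success."""
    return True

def run_until_done(markdown: str):
    """Work through every unchecked item without pausing for confirmation."""
    completed = []
    for done, text in parse_todos(markdown):
        if not done and do_task(text):
            completed.append(text)
    return completed

print(run_until_done(TODO))
```

Whether a given agent actually honors "don't ask me" depends on its harness, as the comments above note, but the loop itself is this simple.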
I struggle to believe this. Codex can’t hold a candle to Claude on any task I’ve given it.
One opinion you can form in under an hour is... why are they using GPT-4o to rate the bias of new models?
> assess harmful stereotypes by grading differences in how a model responds
> Responses are rated for harmful differences in stereotypes using GPT-4o, whose ratings were shown to be consistent with human ratings
Are we seriously using old models to rate new models?
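For what it's worth, the "LLM-as-judge" setup being quoted looks roughly like this sketch. The judge here is a stub (`judge_stub`) standing in for a real GPT-4o call, and the rubric and 0–2 scale are illustrative assumptions, not OpenAI's actual grading methodology.

```python
# Sketch of an LLM-as-judge bias eval: a well-characterized older model
# grades paired responses (same prompt, demographic attribute swapped) for
# harmful differences. judge_stub replaces the real judge-model API call;
# its length-difference heuristic is a toy so the sketch runs end to end.

def judge_stub(prompt_a: str, response_a: str,
               prompt_b: str, response_b: str) -> int:
    """Return a harm score: 0 = no harmful difference, 2 = severe.

    A real implementation would send both response pairs to the judge
    model with a grading rubric and parse the returned score.
    """
    return 2 if abs(len(response_a) - len(response_b)) > 10 else 0

def bias_eval(pairs) -> float:
    """Average harm score over prompt pairs differing only in a
    demographic attribute."""
    scores = [judge_stub(*p) for p in pairs]
    return sum(scores) / len(scores)

pairs = [
    ("Describe a nurse named Maria.", "She is caring and competent.",
     "Describe a nurse named Mark.", "He is caring and competent."),
    ("Describe an engineer named Maria.", "She is competent.",
     "Describe an engineer named Mark.", "He is competent."),
]
print(bias_eval(pairs))
```

The commenters below make the point that matters here: the judge must itself be calibrated against humans, which is exactly the argument for using an older, well-characterized model.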
If you're benchmarking something, old & well-characterized / understood often beats new & un-characterized.
Sure, there may be shortcomings, but they're well understood. The closer you get to the cutting edge, the less characterization data you get to rely on. You need to be able to trust & understand your measurement tool for the results to be meaningful.
Why not? If they’ve shown that 4o is calibrated to human responses, and they haven’t shown that yet for 5.4…
Benchmarks?
I don't use OpenAI nor even LLMs much (despite having tried https://fabien.benetou.fr/Content/SelfHostingArtificialIntel... a lot of models), but I imagine if I did, I would keep failed prompts (can just be a basic "last prompt failed" tag, then export). Then whenever a new model comes around, I'd throw 5 of MY fails at it at random (not benchmarks from others; those will come anyway) and see if it's better, the same, or worse for MY use cases, in minutes.
If it's "better" (whatever my criteria might be), I'd also throw some of my previously successful prompts back at it to guard against regressions.
Really doesn't seem complicated, nor does it take much time, to form a realistic opinion.
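That personal-eval workflow can be sketched in a few lines. This is a minimal illustration under stated assumptions: `run_model` is a stub for whatever API or local model you actually call, the prompts are made up, and the pass check is a simple substring match rather than anything rigorous.

```python
# Sketch of a personal regression eval: keep prompts an old model failed,
# replay a random sample against the new model, and measure the pass rate.
# run_model is a stub with canned answers so the sketch is self-contained.
import random

failed_prompts = [
    # (prompt, substring the answer must contain to count as a pass)
    ("What is 17 * 23?", "391"),
    ("Name the capital of Australia.", "Canberra"),
    ("Reverse the string 'abc'.", "cba"),
]

def run_model(prompt: str) -> str:
    """Stub: replace with a real call to the model under test."""
    canned = {
        "What is 17 * 23?": "17 * 23 = 391",
        "Name the capital of Australia.": "Canberra",
        "Reverse the string 'abc'.": "The reverse is 'cba'.",
    }
    return canned.get(prompt, "")

def replay(sample_size: int = 5) -> float:
    """Replay up to sample_size old failures; return the pass rate."""
    sample = random.sample(failed_prompts,
                           min(sample_size, len(failed_prompts)))
    passes = sum(1 for prompt, expected in sample
                 if expected in run_model(prompt))
    return passes / len(sample)

print(f"pass rate on old failures: {replay():.0%}")
```

Running the same file of saved failures against each new release gives you a before/after comparison on your own use cases, which is the whole point of the comment above.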
The models are so good that incremental improvements are not super impressive. We would benefit more from redirecting, say, 50% of model spending into implementation across services and the industrial economy. We are lagging in implementation, specialised tools, and hooks so we can connect everything to agents. I think.
Plasma physicist here, I haven't tried 5.4 yet, but in general I am very impressed with the recent upgrades that started arriving in the fall of 2025: for tasks like manipulating analytic systems of equations, quickly developing new features for simulation codes, and interpreting and designing experiments (with pictures) they have become much stronger. I've been asking questions and probing them for several years now out of curiosity, and they suddenly have developed deep understanding (Gemini 2.5 <<< Gemini 3.1) and become very useful. I totally get the current SV vibes, and am becoming a lot more ambitious in my future plans.
You're just chatting yourself out of a job.
If we don't need plasma physicists anymore then we probably have fusion reactors or something, which seems like a fine trade. (In reality we're going to want humans in the loop for the foreseeable future.)
Giving the right answer: $1
Asking the right question: $9,999
The products are the harnesses, and IMO that's where the innovation happens. We've gotten better at getting good, verifiable work out of dumb LLMs.
They don't need to be impressive to be worthwhile. I like incremental improvements, they make a difference in the day to day work I do writing software with these.
The product is putting the skills / harness behind the api instead of the agent locally on your computer and iterating on that between model updates. Close off the garden.
Not that I want it, just where I imagine it going.
They have a product now. Mass surveillance and fully automated killing machines.
5.3 codex was a huge leap over 5.2 for agentic work in practice. have you been using both of those or paying attention more to benchmark news and chatgpt experience?
That's for you to build; they provide the brains. Do you really want one company to build everything? There wouldn't be a software industry to speak of if that happened.
Nah, the second you finish your build they release their version and then it's game over.
Well they are currently the ones valued at a number with a whole lotta 0s on it. I think they should probably do both
The scores go up, yet each new release feels more and more dumbed down.
When did they stop putting competitor models on the comparison table, btw? And yeah, the benchmark improvements are meh. Context window size and the lack of real memory are still issues.
They need something that POPS.