Do we really know that gpt-5-codex is a fine-tune of gpt-5(-thinking)? The article doesn't clearly say that, right?
I suspect this is smaller than gpt-5, or at least a quantized version, similar to what I suspect Opus 4.1 is. That would also explain why it's faster.
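To spell out why quantization would explain the speed (this is a toy sketch of int8 post-training weight quantization, purely illustrative and nothing to do with what OpenAI actually ships): shrinking weights from fp32 to int8 cuts the memory that has to move per token by roughly 4x, and memory bandwidth is usually the inference bottleneck.

    # Toy int8 weight quantization, illustrative only.
    import numpy as np

    w = np.random.randn(4096, 4096).astype(np.float32)  # fp32 layer: ~64 MB

    scale = np.abs(w).max() / 127.0  # symmetric per-tensor scale
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)  # ~16 MB

    w_hat = q.astype(np.float32) * scale  # approximate reconstruction
    print("max abs error:", float(np.abs(w - w_hat).max()))

For typical weight distributions the reconstruction error is small, which is why quantized inference often costs little accuracy while being noticeably faster.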
OpenAI say:
"Today, we’re releasing GPT‑5-Codex—a version of GPT‑5 further optimized for agentic coding in Codex."
So yeah, simplifying that to a "fine-tune" is likely incorrect. I just added a correction note about that to my article.
It's annoying to see a link to a Theo video -- same guy who went with Simon to OpenAI's GPT-5 glazefest and had to backpedal when everyone realized what a shill he is.
I know neither of them is a journalist -- I'm probably expecting too much -- but Simon should know better.
Literally the only channel I've ever blocked on YouTube.
This seems to me like a very harsh take on Theo’s motivations. I don’t know him beyond what I’ve learned from his videos, but given Occam’s razor I’m inclined to believe him: GPT-5 seemed much better during the private demo than in the public release. There are many possible explanations, but jumping to ‘shill’ (which implies deception) seems uncalled for.
While not a journalist, Simon definitely has a background in journalism.
He was one of the original authors of Django, back when it was built in a newsroom for journalists with deadlines.
Exactly. That's why I said he should know better. He never should have gone to that event to hype GPT-5 under the guise of "testing" it out.
I did actually consider that quite a bit when I got invited to OpenAI's mysterious recorded launch event (they didn't tell us it was GPT-5 until we got there) - would it damage my credibility as an independent voice in the AI space?
I decided to risk it. Crucially, OpenAI at no point asked for any influence over my content at all, aside from sticking to their embargo (which I've done with other companies before).
> "We find that comments by GPT‑5-Codex are less likely to be incorrect or unimportant" -- less unimportant comments in code is definitely an improvement!
This seems to be a misunderstanding. In the original OpenAI article, "comments" refers to code review comments, not comments in the code itself.
The pelican is not very good
But probably fast
Would be faster if it got on the bike