I have seen fluctuations in tokens/sec. Early yesterday it was roughly equivalent to the non-Codex GPT-5 (this branding ...); late yesterday I had a severe drop-off in tokens/sec. Today it seems to have improved again, and with the reduced amount of unnecessary/rambling token output, GPT-5-Codex (Medium) seems faster overall. LLM rollouts always have this back-and-forth in tokens/sec, especially in the first few days.
Extremely slow for me - it takes minutes to get anything done. Regular GPT-5 was much faster. Hoping it's mostly launch-day load.
I've been using gpt-5 with effort=high, but for gpt-5-codex, try: `-c model_reasoning_effort=medium`.
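For anyone unsure where that flag goes, here's a rough sketch of the two usual ways to apply it; the `-c key=value` override syntax and the `~/.codex/config.toml` key name are my assumptions about the Codex CLI setup, so check your version's docs:

```shell
# One-off override for a single session (assumes the CLI's -c key=value syntax):
codex -c model_reasoning_effort=medium

# Or persist it so every session uses medium effort, by adding this line
# to ~/.codex/config.toml (assumed config location/key name):
#   model_reasoning_effort = "medium"
```

Worth noting the trade-off: medium effort generally means fewer reasoning tokens per turn, which is likely why it feels faster.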
On high it is totally unusable.
even on medium ... gpt-5 was way faster, at least that's my first impression