I have seen fluctuations in token/sec. Early yesterday it was roughly equivalent to non-Codex GPT-5 (this branding ...), then late yesterday I hit a severe drop-off in token/sec. Today it seems to have improved again, and with the reduced amount of unnecessary/rambling token output, GPT-5-Codex (Medium) feels faster overall. LLM rollouts always have this back-and-forth in token/sec, especially in the first few days.