So let me get this straight: OpenAI previously had an issue with LOTS of different models and versions being available. Then they solved this by introducing GPT-5, which was more like a router that put all these models under the hood, so you only had to prompt GPT-5 and it would route to the most suitable model. This worked great, I assume, and made the UI comprehensible for the user. But now they are starting to introduce more different models again?

We got:

- GPT-5.1

- GPT-5.2 Thinking

- GPT-5.3 (codex)

- GPT-5.3 Instant

- GPT-5.4 Thinking

- GPT-5.4 Pro

Who’s to blame for this ridiculous path they’re taking? I’m so glad I’m not a Chat user, because this adds so much unnecessary cognitive load.

The good news here is the support for a 1M context window; it has finally caught up to Gemini.

Well, they have older ones of course. But the current options actual users see are "Auto", "Instant (5.3)", or "Thinking (5.4)". Not that complicated, really.

I guess you still have "Auto" as an option to route your request.

I much prefer this: we can choose based on our use cases, and people who don’t care can still use Auto.

GPT-5 itself might have solved the problem of having too many different models somewhere in the backend.