I suspect we've already reached the point with models at the GPT-5 tier where the average person will no longer recognize improvements; such a model can be improved slightly at long intervals and indeed run for years. Meanwhile, research-grade models will still need to be trained at massive cost to improve performance on relatively short timescales.

Whenever someone has complained to me about issues they are having with ChatGPT on a particular question or type of question, the first thing I do is ask them what model they are using. So far, no one has ever known offhand what model they were using, nor were they even aware that there are multiple models!

If you understand that there are multiple models from multiple providers, that some of those models are better at certain things than others, and how to get those models to complete your tasks, you are in the top 1% (probably less) of LLM users.

This would be helpful if there were some kind of first principle by which to gauge that better-or-worse comparison, but there isn't one outside of people's value judgements, like the one you're offering.

I may not qualify as an "average user", but I shudder at the thought of being stuck with a 1+ year stale model for development, given my experiences using a newer framework than what was available during training.

Passing in docs usually helps, but I've had some incredibly aggravating experiences where a model just absolutely cannot accept that its "mental model" is incorrect and that it needs to forget the tens of thousands of lines of out-of-date example code it ingested during training. IMO it's an under-discussed aspect of how effective LLMs currently are for development, thanks to the training arms race.

I recently had to fight Gemini to accept that a library (a Google-developed AI library for JS, somewhat ironically) had just released a major version update with a lot of API changes that invalidated 99% of the docs and example code online. And boy, was there a lot of old code floating around, thanks to the vast amounts of SEO blog spam for anything AI-adjacent.
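For what it's worth, the "passing in docs" workaround usually looks something like the sketch below: pin the new version explicitly and put the current docs ahead of whatever the model remembers from training. This is only an illustration using the OpenAI Python SDK; the file path, model name, and prompt wording are all placeholders, and the same idea applies to any provider's API.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical path: excerpts from the library's current (post-rewrite) docs,
# maintained locally by the developer.
with open("docs/current_api_notes.md") as f:
    current_docs = f.read()

system_prompt = (
    "The project uses the NEW major version of this library. The API changed "
    "significantly, so ignore older examples you may have seen during training "
    "and rely only on the documentation below.\n\n"
    "--- CURRENT DOCS ---\n"
    f"{current_docs}\n"
    "--- END DOCS ---"
)

response = client.chat.completions.create(
    model="gpt-5",  # placeholder; substitute whichever model you actually use
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Port this snippet to the new API: ..."},
    ],
)
print(response.choices[0].message.content)
```

Even with this, the model can still drift back to the old API mid-conversation, which is exactly the aggravation described above.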

>Passing in docs usually helps, but I've had some incredibly aggravating experiences where a model just absolutely cannot accept that its "mental model" is incorrect and that it needs to forget the tens of thousands of lines of out-of-date example code it ingested during training. IMO it's an under-discussed aspect of how effective LLMs currently are for development, thanks to the training arms race.

I think you overestimate the amount of code turnover in 6-12 months...

Strangely, I feel GPT-5 is the opposite of an improvement over the previous models, and I'm considering just using Claude for actual work. Also, the voice mode went from really useful to useless: "Absolutely, I will keep it brief and give it to you directly. …some wrong answer… And there you have it! As simple as that!"

>Strangely, I feel GPT-5 is the opposite of an improvement over the previous models

This is almost surely wrong, but my point was about GPT-5-level models in general, not GPT-5 specifically...

The "Pro" variant of GTP-5 is probably the best model around and most people are not even aware that it exists. One reason is that as models get more capable, they also get a lot more expensive to run so this "Pro" is only available at the $200/month pro plan.

At the same time, more capable models are also a lot more expensive to train.

The key point is that the relationship between all these magnitudes is not linear, so the economics of the whole thing start to look wobbly.

Soon we will probably arrive at a point where these huge training runs must stop, because the performance improvement does not match the huge cost increase, and because the resulting model would be so expensive to run that the market for it would be too small.
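To make the "wobbly economics" concrete, here is a toy back-of-envelope calculation. Every number in it is made up for illustration; the only thing being exercised is the claim above that training cost grows faster than the revenue a more capable model can capture.

```python
# Toy model of the argument above: training cost and attributable revenue both
# grow per generation, but cost grows faster. All figures are hypothetical.
train_cost = 1e8        # assumed: $100M to train the current generation
annual_revenue = 1e9    # assumed: $1B/year attributable to that model

COST_GROWTH = 10.0      # assumed training-cost multiplier per generation
REVENUE_GROWTH = 3.0    # assumed revenue multiplier per generation

for gen in range(1, 6):
    train_cost *= COST_GROWTH
    annual_revenue *= REVENUE_GROWTH
    print(f"gen +{gen}: cost ${train_cost / 1e9:.1f}B, "
          f"revenue ${annual_revenue / 1e9:.1f}B, "
          f"revenue/cost = {annual_revenue / train_cost:.2f}")
```

Under these (made-up) growth rates the revenue-to-cost ratio drops below 1 within a couple of generations, which is the point at which the next huge training run stops making sense.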

>Soon we will probably arrive at a point where these huge training runs must stop, because the performance improvement does not match the huge cost increase, and because the resulting model would be so expensive to run that the market for it would be too small.

I think we're a lot more likely to get to the limit of power and compute available for training a bigger model before we get to the point where improvement stops.