It’s splitting hairs, but it’s a pretty important hair. Does anyone think models won’t need continuous retraining? Does anyone think the labs won’t keep trying to scale them? Personally, I think we’re reaching diminishing returns with scaling, which is probably good because we’ve basically run out of content to train on, so perhaps that does stop, or at least slow down drastically. But I don’t see a scenario where constant retraining isn’t the norm, even if the amount of content we use for it grows only slightly.

Well, models are definitely good enough for some things in their current state without needing to be retrained (machine translation, for example, was essentially a solved problem by GPT3).

That’s true but irrelevant. No AI company is stopping training and further model development. OpenAI didn’t stop with GPT3, and they won’t stop with GPT5. No company, AI or otherwise, stops innovating in its market segment: you have to keep innovating to stay competitive.