>Soon we will probably arrive at a point where these huge training runs must stop, because the performance improvement does not match the huge cost increase, and because the resulting model would be so expensive to run that the market for it would be too small.
I think we're far more likely to hit the limits of power and compute available for training a bigger model before we reach the point where improvement stops.