"Heavy readers - applications that consume massive amounts of context but generate minimal output - operate in an almost free tier for compute costs."
Not saying there isn't interesting analysis here, but it assumes they don't have to pay for access to those massive amounts of context. Sources like Stack Overflow and Reddit that used to be free are not going to stay freely available to keep the model up to date.
If this analysis only means "they're not going to turn the lights out because of the costs of running", that may be so. But if they cannot afford to keep training new models every so often, they will become less relevant over time, and I don't know whether they will get another ocean of VC money to do it all again (at higher cost than last time, because the sources want their cut now).