So as I said, a self-interested metric from someone who also controls how many tokens it takes to get a desirable result from their models.
Users are willingly paying for larger volumes of tokens. You are layering your own unproven interpretation onto that; I would have arrived at the opposite interpretation from the same facts. Models are becoming more token-efficient at the same tasks (for example, ChatGPT 5.3 versus 5.2, which halved the token count), and capabilities have shown a logarithmic relationship with token count ever since o1-preview was revealed in September 2024.
No, you have gone off on your own tangent. The person you're responding to is talking about money, and my point is that you're using a misleading metric. Even if the current user base is paying more for that "exponential token usage", it does not add up to the industry's cost of maintaining and building on this technology, especially since we are not accounting for what that token usage costs the provider. First you cited Anthropic as your source, but now you're talking about OpenAI's ChatGPT, a company still floundering for a product and a user base, which itself claims it will become profitable through subscriptions at a scale never before seen in a subscription business model.