When I say 10 times cheaper, I mean comparing models of the same capability. The kind of performance you get now for a $200 subscription would probably have cost $2,000 a year ago.

I don’t believe that current models are 1000x better than the initial ChatGPT release. What metric are you using?

You don't? Today I use Gemini to write and optimize CUDA kernels. When I first used GPT-3 in the OpenAI playground, I was extremely impressed just to get it to output a hello-world program in C.

I understand what you're saying. However, I'm not sure it's that germane when we're talking about whether the current $200 subscription fee actually delivers value for money, or whether AI giants are manipulating performance numbers for marketing points.

I assume the original reply was addressing the “never” in this specific point:

“The fact is most ordinary mortals never get access to a fraction of that kind of power”

Since previous generations of models get aggressively retired, the cost reduction essentially never gets passed down to the customer.

A given amount of input and output tokens doesn't cost 10x less than it did before.
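To make the comparison concrete, per-token API cost can be computed like this. All prices below are made-up placeholders, not real vendor rates; the point is only that the ratio between old and new pricing is what matters, not the subscription fee:

```python
def api_cost(input_tokens, output_tokens, price_in_per_m, price_out_per_m):
    """Dollar cost of one request, given prices per million tokens.

    The prices are hypothetical illustrations only.
    """
    return (input_tokens / 1_000_000) * price_in_per_m \
         + (output_tokens / 1_000_000) * price_out_per_m

# Same workload priced at a hypothetical "old" and "new" rate:
old = api_cost(500_000, 100_000, price_in_per_m=30.0, price_out_per_m=60.0)
new = api_cost(500_000, 100_000, price_in_per_m=10.0, price_out_per_m=30.0)
print(old, new, old / new)  # 21.0 8.0 2.625 — roughly 2.6x here, not 10x
```

With these invented numbers the workload gets about 2.6x cheaper, which illustrates the objection: a headline "10x cheaper" claim only holds if the per-token rates themselves dropped by that factor for a model of comparable capability.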