> each model is roughly 2x profitable on its own, but each next model costs 10x the last. The whole thing only works if scaling keeps delivering.
This is a decent argument, but it's not the death knell you think.
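Taking the quoted numbers at face value (hypothetical figures, purely to illustrate the structure of the argument, not actual training costs):

```python
# Hypothetical sketch of the quoted premise: each model returns ~2x its cost,
# but the next model costs 10x as much to train.
cost = 1.0  # cost of generation 0, in arbitrary units
for gen in range(4):
    profit = 2.0 * cost - cost       # "2x profitable" -> net profit equals the cost
    next_cost = 10.0 * cost
    shortfall = next_cost - profit   # must be covered by new outside capital
    print(f"gen {gen}: cost {cost:g}, profit {profit:g}, "
          f"next model costs {next_cost:g}, shortfall {shortfall:g}")
    cost = next_cost
```

Under those assumptions each generation's profit covers only 10% of the next training run, so 90% has to come from fresh capital every time - which is exactly why the premise says it only works if scaling keeps delivering.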
Models are getting 99% more efficient every 3 years - combined with hardware and (mostly) software upgrades, you can get the same amount of output using 99% less power.
The number of applications where AI is already "good enough" keeps growing every day. If the cost goes down 99% every three years, it doesn't take long until you can make a ton of money on those applications.
If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it. So there is tons of forward looking revenue that isn't counted yet.
For the foreseeable future, there are MANY MANY uses of models where a company would not want to host its own models and would be GLAD to pay a 4-5x cost for someone else to host the model and hardware for them.
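The compounding in the cost claim is easy to sketch (assuming, for the sake of argument, that the 99%-per-3-years figure holds):

```python
# If serving cost falls 99% every 3 years, the per-unit cost after n years
# is cost_0 * 0.01 ** (n / 3).
def relative_cost(years: float, drop_per_period: float = 0.99, period: float = 3.0) -> float:
    """Fraction of today's cost remaining after `years`."""
    return (1.0 - drop_per_period) ** (years / period)

for y in (3, 6, 9):
    print(f"after {y} years: {relative_cost(y):.6f} of today's cost")
```

So anything that is marginally uneconomical today becomes 100x cheaper to serve in 3 years and 10,000x cheaper in 6 - if the trend holds, which is the whole question.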
I'm as bullish on OpenAI being "worth" $730B as I was on Snap being worth its IPO price - which it's still down about 80% from (after inflation; roughly ~95% if you adjust against gold).
But guess what - these are MINIMUM valuations based on 50-80% margins - i.e. they're really getting about ~$30B; the rest is the market value of hardware and hosting. OpenAI could be worth 80% less and still easily make a metric fuck-ton of money selling at IPO with a $1T+ market cap to speculative morons...
Realistically, very rich people with high risk tolerance are saying that they think OpenAI has a MINIMUM value of ~$100B. That seems very reasonable given the risk tolerance and wealth.
When models get cheaper to run for OpenAI, they also get cheaper for everyone else. It gets commoditized. AI might be able to do more, but most people aren’t going to pay for a thing they could get for free. See the many models on Huggingface as examples of that.
And as the number of things AI is “good enough” at increases, the list of things on the frontier that people will want to pay OpenAI for shrinks. Even if OpenAI can consistently churn out PhD level math, most companies don’t care about that.
So a necessary (but not sufficient) condition for the math to work out is that frontier tasks still exist and are profitable. This is why CEOs keep hyping up AGI. But what they really want is for developers to keep paying to get AI to center a div.
I love that you are already confident fitting a curve. I want some of that swagger in my life.
I was thinking the same thing.
> Models are getting 99% more efficient every 3 years - combined with hardware and (mostly) software upgrades, you can get the same amount of output using 99% less power.
Even if true, this still doesn't bend the cost curve for training the next model.
> If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it. So there is tons of forward looking revenue that isn't counted yet.
If this is true, it's true for the technology overall, and not necessarily OpenAI since inference would get commoditized quickly at that point. OpenAI could continue to have a capital advantage as a public stock, but I don't think it would if the music stopped.
I would actually like to see the real math here.
The market adoption has increased a lot. The cost to serve has come down a lot per token.
Model sizes have not kept increasing exponentially recently (the high point being the aborted GPT-4.5); most recent refinement seems to come from extended training of relatively smaller models.
Taken together, the training-cost to inference-income ratio has likely changed dramatically.
> 99% more efficient every 3 years
It's more like 2x efficiency. I'd believe 50% less power, not a ridiculous 99% less.
GPT-4 came out 3 years ago and you can run comparable models for 1% of the cost nowadays. That is not 2x efficiency. That's two orders of magnitude in end-to-end compute efficiency.
You're looking at nearly the entire curve of the tech's development. That's like saying lightbulbs became 99% more energy-efficient and therefore will become another 99% more efficient. Most techs follow an S-curve.
But S-curves are boring and don't moon.
How do we know how much it costs? Or is this just based off the token pricing?
"If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it."
AI stopped progressing, or LLMs? I really dislike people throwing the term AI around.
For the purposes of their argument, I don’t think the distinction matters.
> Models are getting 99% more efficient every 3 years
How many years total are you basing this on?
> Models are getting 99% more efficient every 3 years
The LLM industry has only been around for about 4 years. Extrapolating trends from that is pretty naive.
OK, but everything depends on your numbers being correct. 99% improved efficiency seems like a way too optimistic prediction.
We said all the same shit about VR, dude. Even had a global pandemic show up to boost everyone's interest in the key market of telepresence. Turns out the merry go round can stop abruptly.
No. Like many of us, I never saw much value in VR. LLMs have undeniable value that is general and broad. Now, does that mean OpenAI has a moat? No, it does not.
Did we?! You and Mark Zuckerberg maybe.
"Am I nothing to you?" --Tim Cook