I've been skeptical about LLMs being able to replace humans in their current state (and they've only gotten marginally better in the last 18 months), but let us not forget that GPT-3.5 (the first truly useful LLM) was only 3 years ago. We aren't even 10 years out from the initial papers about GPTs.

> was only 3 years ago

That's one way of looking at it.

Another way to look at it is that GPT-3.5 was $600,000,000,000 ago.

Today's AIs are better, but are they $600B better? Does it feel like that investment was sound? And if not, how much slower will future investments be?

Another way to look at that $600B of improvement is to ask whether the $600B was actually spent on improving it.

This just smells like classic VC churn and burn. You're given the money and you have to spend it. And most of that money wasn't actually money, it was free infrastructure. Who knows the actual "cost" of the investments, but my uneducated brain (while trying to make a point) would say it is 20% of the stated value of the investments. And maybe GPT-5 plus the other features OpenAI has shipped are $100B better.

> And most of that money wasn't actually money, it was free infrastructure.

But everyone who chipped in is counting their contribution against these top-line figures, since stock prices are based on dollars specifically.

> but my uneducated brain (while trying to make a point) would say it is 20% of the stated value of the investments

An 80% drop in valuations as people snap back to reality would be devastating to the market. But that's the implication of your line here.

And yet, we're clearly way into the period of diminishing returns.

I'm sure there are still some improvements that can be made to the current LLMs, but most of those improvements are not about making the models actually better at getting the things they generate right.

If we want more significant improvements in what generative AI can do, we're going to need new breakthroughs in theory or technique, and that's not going to come by simply iterating on the transformers paper or throwing more compute at it. Breakthroughs, almost by definition, aren't predictable, either in when or whether they will come.

Why are you assuming exponential or even linear growth/improvement?

E.g. OpenAI went from "AGI has been achieved internally" to lying with graphs (cutting the y-axis off at 50% or 70% to present minor improvements as breakthroughs).
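For what it's worth, the truncated-axis trick is easy to demonstrate. A minimal sketch with made-up scores (not anything OpenAI actually published), assuming matplotlib:

```python
import matplotlib.pyplot as plt

# Hypothetical benchmark scores -- purely illustrative, not real model numbers.
models = ["old model", "new model"]
scores = [82.0, 84.0]  # a modest 2-point gain

fig, (honest, cropped) = plt.subplots(1, 2, figsize=(8, 3))

# Full axis: the 2-point gain looks as marginal as it is.
honest.bar(models, scores)
honest.set_ylim(0, 100)
honest.set_title("y-axis from 0")

# Truncated axis: the same 2 points fill most of the chart.
cropped.bar(models, scores)
cropped.set_ylim(80, 85)
cropped.set_title("y-axis cut at 80")

plt.tight_layout()
plt.show()
```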

The growth can just as easily be logarithmic.