I think you're falling victim to survivorship bias there, or something like it.

In 1940 I might have said "fusion power is possible" based entirely on the advanced physics knowledge I had. And I would have been correct: according to the laws of physics, it is possible. We still don't have it, though. Watching Neil Armstrong walk on the moon, I might have said "moon colonies are possible", and I'd have been right there too. And yet...

Those two things are prevented by economics more than physics.

For AI in particular, the economics currently favor ongoing capability R&D - and even if they didn't favor AI R&D directly (i.e. if ChatGPT and Stable Diffusion never happened), they would still favor making the computational inputs of AI R&D cheaper over time.

Building advanced AIs is becoming easier and cheaper. It's just that the bar of "good enough" has gone off to space, and a "good enough" from 2020 is, nowadays, profoundly unimpressive.

I'm not sure how much it will take to reach AGI. No one is. But the path there is clearly getting shorter over time. And LLMs existing, improving, and doing what they do makes me assume shorter AGI timelines, and call for a vote of no confidence in human exceptionalism.

> But the path there is getting shorter over time, clearly.

Why do you assume there is no hard limit we’ll hit with the current tech that prevents us from reaching AGI?

The first reason is that LLM advancements have yet to stop. The second is that "the current tech" is a moving target.

If, let's say, in 2029, it turns out that autoregressive transformer LLMs have exhausted their potential, the R&D that currently goes into improving them would be redirected into finding alternatives. And I struggle to imagine not finding any.