>In a short period of time, AI would be so far ahead of us and our existing ideas, that the world would become unrecognizable.
>That's not what's happening here ...
On the contrary, it very much is.
I'd argue AGI has already been achieved via LLMs today, provided they have excellent external cognitive infrastructure supporting them.
However, the gap from AGI to ASI is perhaps wider than anticipated, such that we're not seeing a hard takeoff immediately after reaching the first.
Just, you know—potential mass unemployment on a scale never seen before. When you frame it that way, whether LLMs qualify as AGI is largely semantics.
That said, I really hope you're right and I'm wrong.
Ah yes, the $0.50/h support infrastructure from the places that cannot refuse the deal. "Frontier" LLMs currently cosplay a dunk with Google and late-stage Alzheimer's. Sure, they speed up brute-forcing the correct answer a lot by trying more likely texts. And? This overfed Markov chain doesn't need supporting infrastructure — it IS supporting infrastructure, for the cognitive something that is not being worked on prominently, because all resources are needed to feed the Markov chain.
The silence surrounding new LLM architectures is so loud that an abomination like "claw" gets prime airtime. Meanwhile, models keep being released. Maybe the next one will be the lucky draw. Finding out how well LLMs scale was pure luck in the first place, so why shouldn't the rest of the progress be luck-driven too?
Kerbal AGI program...
Pretty much; it's just that these overfed Markov chains, when given a proper harness and agentic framework, can produce entire software projects in a fraction of the time it used to take.
Kerbal AGI program hits the nail on the head.
Sorry, I thought you meant "support infrastructure" in a much wider sense — yeah, LLMs are frighteningly good at lockpicking tests using source-code-shaped inputs. It's just that they are also frighteningly good at finding insane ways to game those tests. I wouldn't say that LLMs are very "G" in the AI they do — present them with confusing semantics and they fall off the self-contradiction cliff. No capability for developing theory systematically from observations.