Ah yes, the 0.50$/h support infrastructure from the places that cannot refuse the deal. "frontier" LLMs currently cosplay a dunk with google and late alzheimer's. Surely, they speed up brute-forcing correct answer a lot by trying more likely texts. And? This overfed markov chain doesn't need supporing infrastructure — it IS supporting infastructure, for the cognitive something that is not being worked on prominently, because all resources are needed to feed the markov chain.

The silence surrounding new LLM architectures is so loud that an abomination like "claw" gets prime airtime. Meanwhile models keep being released. Maybe the next one will be the lucky draw. It was pure luck, finding out how well LLMs scale, in the first place. Why shouldn't the rest of progress be luck driven too?

Kerbal AGI program...

Pretty much, it's just that these overfed Markov chains when given a proper harness and agentic framework are able to produce entire software projects in a fraction of the time it used to take.

Kerbel AGI program hits the nail on the head.

Sorry, I tought you meant "support infrastructure" in a much wider sense — yeah, LLMs are frighteningly good at lockpicking tests using source code shaped inputs. It's just that they are also frighteningly good at finding insane ways to game the tests, too. I wouldn't say that LLMs are very "G" in the AI they do — present them with confusing semantics, and they fall off the self-contradiction cliff. No capability of developing theory systematically from observations.