“Web” is such a broad category. Quite a leap from LLM wrappers.

Well, LLMs are themselves very broad. They encompass everything from web search to anything you could automate yourself but don't have the time to.

I don't think LLM capabilities have to reach human equivalence for their uses to multiply for years to come.

I don't think LLM technology as it exists can reach AGI by the simple addition of more compute power, and moreover, I don't think adding compute will necessarily provide proportionate benefit (indeed, someone pointed out that the current talent race acknowledges that brute force has likely had its day and some other "magic" is needed. Unlike brute force, technical advances can't be summoned at will).

"Brute force" is only held back by economics and hardware limitations.

There are still massive gains to be had from scaling up - but frontier training runs have converged on "about the largest model that we can fit into our existing hardware for training and inference". Going bigger than that comes with non-linear cost increases. The next generations of AI hardware are expected to push that envelope.
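To make "non-linear cost increases" concrete, here's a rough back-of-envelope sketch. It assumes a Chinchilla-style compute-optimal recipe (training FLOPs ≈ 6·N·D, with the token count D scaling at roughly 20 tokens per parameter); the cost-per-FLOP figure is a made-up placeholder, not a real price, so only the shape of the curve matters.

```python
# Back-of-envelope: why "just make the model bigger" gets expensive fast.
# Assumptions (mine, not from this thread): Chinchilla-style compute-optimal
# training, where training FLOPs ~ 6 * N * D and the optimal token count D
# is roughly 20 tokens per parameter. COST_PER_FLOP is a placeholder.

COST_PER_FLOP = 2e-18    # hypothetical $/FLOP, illustrative only
TOKENS_PER_PARAM = 20    # rough Chinchilla-optimal ratio

def training_cost(params: float) -> float:
    """Estimated training cost in dollars for a model with `params` parameters."""
    tokens = TOKENS_PER_PARAM * params
    flops = 6 * params * tokens   # ~6 FLOPs per parameter per training token
    return flops * COST_PER_FLOP

for n in (7e9, 70e9, 700e9):
    print(f"{n/1e9:>5.0f}B params -> ~${training_cost(n):,.0f}")
```

Because the optimal token count grows with parameter count, training compute grows roughly quadratically in model size under these assumptions: a 10x larger model costs on the order of 100x more to train, before you even get to the inference bill.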

The reason major AI companies prioritize things like reasoning modes and RLVR over scaling the base models up is that reasoning and RLVR deliver real-world performance gains more cheaply and quickly. Once scaling up becomes cheaper, or once the gains you can squeeze out of RLVR are depleted, they'll go back to scaling up.

> Well, LLMs are themselves very broad.

I think overstating their broadness is core to the hype cycle going on. Everyone wants to believe—or wants a buyer to believe—that a machine which can grow documents about X is just as good (and reliable) as actually creating X.