Counterpoint: in my consulting role, I've directly seen well over a billion dollars in failed AI deployments in enterprise environments. These systems are good at solving narrow problems, but they fall apart in problem spaces exceeding roughly thirty concurrent decision points. Just today I got pulled into a client's data migration where the agent (Claude) processed test data instead of the data identified in the prompt. It went so far as to rename the test files to match the actual source data files and proceed from there, signalling the all-clear as it went. Nobody caught it until the customer, in a workshop, said, and I quote, "This isn't our fucking data".
In many cases, using LLMs is a crypto-style fad bubble: people are throwing everything at the wall to see what sticks, and there are plenty of grifters and twits in the mix.
There's the AI industry, which is what you engaged with: a more or less flailing attempt to capitalize on the new technology, one that yields some results but has produced a staggering number of flops.
There's also the AI technology itself. Progress there is on an exponential trend, tied to Moore's law, with trillions of dollars of impetus behind it, nearly completely decoupled from the broader market. I think we'll see at least a decade of accelerating progress, with massive world models and language models built on current architectures, and from a technical point of view I believe we're only a couple of breakthroughs away from a truly general architecture.
The worst-case scenario for AI is having to wait on sensor technology and scans of the human brain. At some point we will have a model of human neural function and connectomes that is complete enough, analyzed enough, and explicable enough to build AI that processes information the same way the brain does.
We're probably 20 years or less from that point. The reason I say this is that nearly all brain tissue is generalized: you don't have one kind of mechanism for sight, another for thinking, another for feeling happy, another for remembering things. It all runs on the same basic substrate. Every time we map out a cubic millimeter of tissue, we make progress toward understanding the algorithms by which we experience and process the world.
On the software side, though, I suspect we're within a few years: one person with a profound insight will be able to make the leap between whatever it is that humans do and how some AI model processes information, and put that insight into algorithmic form. There may be multiple insights along the path, but progress is undeniably happening, and the rate of progress is increasing day by day. AI might already be capable enough to make that last little leap without human intervention.
We're in brute-force territory, with massive ChatGPT and Grok models requiring billions of dollars of infrastructure and supporting systems to train and run.
In 20 years, stuff like that will be achievable by an ambitious high school computer lab, or a rich nerd building things for kicks.
You can effectively fit all of the text of the internet onto a dozen 2TB microSD cards. Throw in the pirate sources, like Sci-Hub, pirated books, all the other text out there, and maybe it takes 20 of those cards. For $5k or less you can store and own it all.
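To sanity-check that, here's a back-of-envelope sketch in Python. The per-card price and corpus sizes are my own rough guesses, not figures from any source:

    # Rough storage/cost estimate for "own all the text" - every number is an assumption.
    CARD_TB = 2            # capacity of one microSD card, in terabytes
    CARD_PRICE_USD = 250   # assumed street price per 2TB card

    web_text_tb = 24       # assumed size of deduplicated public web text
    extras_tb = 16         # assumed size of Sci-Hub, book archives, everything else

    total_tb = web_text_tb + extras_tb
    cards = -(-total_tb // CARD_TB)      # ceiling division: how many cards needed
    cost = cards * CARD_PRICE_USD

    print(f"{total_tb} TB -> {cards} cards -> ~${cost:,}")
    # 40 TB -> 20 cards -> ~$5,000

Change the assumed sizes and the card count and price scale with them, but the order of magnitude holds.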
A phone in 2045 will have compute, throughput, and storage comparable to a state-of-the-art GPU server today, and we're likely to optimize in the direction of AI hardware between now and then.
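The arithmetic behind that kind of claim is just compounding doublings. A quick sketch, where the doubling period and today's phone-to-server gap are assumptions on my part, not measured numbers:

    # How many doublings fit between now and 2045, and how much headroom that buys.
    years = 2045 - 2025
    doubling_period = 2.5      # assumed years per doubling for phone-class hardware
    doublings = years / doubling_period
    growth = 2 ** doublings

    gap_today = 100            # assumed raw-throughput gap: phone NPU vs server GPU

    print(f"{doublings:.0f} doublings -> ~{growth:.0f}x, vs an assumed ~{gap_today}x gap today")
    # 8 doublings -> ~256x, vs an assumed ~100x gap today

Even with a slower doubling period, twenty years of compounding closes a two-orders-of-magnitude gap.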
The current AI startup bubble is going to collapse, no doubt, because LLMs aren't the right tool for many jobs, and frontier models keep eating the edge cases and niches people are trying to exploit. We'll see it settle into a handful of big labs, but those big labs will continue chugging along.
I'm not betting on stagnation, though, and truly believe we're going to see a handful of breakthroughs that will radically improve the capabilities of AI in the near term.