Remember that the web also had a bubble that popped and look at where are we now with Google, Amazon, Meta...
I think that there is a bubble but it's shaped more like the web bubble and less like the crypto bubble.
As with any investing there's a risk appetite/timescale component to thinking about this stuff. Lots of companies went to zero in the dot-com bubble. Even Amazon was down over 90% between the end of 1999 and late 2001, and took until 2007 to recover to its high. NASDAQ overall took 15 years to return to its March 2000 high. Some incredible returns to be had if you waited it all out, to be sure, but it's hard to know what the interim looks like.
It's taken Cisco 25 years to recover
Intel never recovered. Well they did if you count dividends but still..
Yeah, but only the ones that evolved a lot from the initial products everyone hyped; the products people hyped in 2000 are extinct or free. And I still don't understand where Facebook makes money. :)
Regarding LLMs there are two concerns: current products don't have any killer feature to lock in customers, so people can easily jump ship. And diminishing returns: if there isn't clear progress with models, then free/small, maybe even local, models will fill most people's needs.
People are speculating that even OAI is burning more money than they make; it's hard to say what will happen if customer churn increases. Take me, for example: I've never paid for LLMs specifically and haven't used them in any major way, but I used free Claude to test how it works and see whether it fits my workflow. I might have moved to the paid tier eventually. But recently someone pointed out that Google cloud storage includes "free" Gemini Pro, and I've switched to it, because why not, I'm already paying for the storage part. And there was nothing keeping me with Anthropic. (Actually, that name alone is revolting imo.) I wrote this as an example: when monsters like Google or Microsoft or Apple start bundling their solutions (and advertising them properly, unlike Google), then specialized companies, including OAI, will feel very, very bad, given their insane expenses and investments.
> And I still don't understand where Facebook makes money. :)
If that's a genuine question: Facebook sells ads, information and influence (e.g. to political parties). It's a very profitable enterprise. In 2024 Meta made $164B in revenue [0], and they're still growing at ~16% year-over-year.
[0] https://investor.atmeta.com/investor-news/press-release-deta...
You don’t understand how the world’s 5th largest company by market cap makes money and this is evidence of… something?
That was a joke, mostly unrelated to the main point - about LLM corporations' finances.
“Web” is such a broad category. Quite a leap from LLM wrappers.
Well, LLMs are themselves very broad. They encompass everything from web search to anything you could automate yourself but don't have the time to.
I don't think LLM capabilities have to reach human-equivalence for their uses to multiply for years to come.
I don't think LLM technology as it exists can reach AGI by the simple addition of more compute power, and moreover, I don't think adding compute is necessarily going to provide a proportionate benefit (indeed, someone pointed out that the current talent race acknowledges that brute force has likely had its day and some other "magic" is needed. Unlike brute force, technical advances can't be summoned at will).
"Brute force" is only held back by economics and hardware limitations.
There are still massive gains to be had from scaling up - but frontier training runs have converged on "about the largest model that we can fit into our existing hardware for training and inference". Going bigger than that comes with non-linear cost increases. The next generations of AI hardware are expected to push that envelope.
The reason why major AI companies prioritize things like reasoning modes and RLVR over scaling the base models up is that reasoning and RLVR give real world performance gains cheaper and faster. Once scaling up becomes cheaper, or once the gains you can squeeze out of RLVR deplete, they'll get back to scaling up once again.
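To put a rough number on the "non-linear cost increases" point: under Chinchilla-style compute-optimal training (roughly 20 tokens per parameter, ~6 FLOPs per parameter per token), training compute grows roughly with the square of model size. A minimal back-of-the-envelope sketch, with those approximations hard-coded as assumptions (the figures are illustrative, not from anyone's actual training run):

    # Rough sketch of why "just scale up" gets expensive fast.
    # Assumptions: Chinchilla-style compute-optimal training uses
    # ~20 tokens per parameter, and costs ~6 FLOPs per parameter per token.

    def training_flops(params: float, tokens_per_param: float = 20.0) -> float:
        """Approximate training FLOPs for a dense model with `params` parameters."""
        tokens = tokens_per_param * params
        return 6.0 * params * tokens  # ~6 FLOPs per parameter per token

    for n in (7e9, 70e9, 700e9):  # 7B, 70B, 700B parameter models
        print(f"{n/1e9:>5.0f}B params -> ~{training_flops(n):.2e} training FLOPs")

    # Because the compute-optimal token count grows with model size, cost grows
    # roughly with params^2: a 10x bigger model needs ~100x the training compute.

That quadratic-ish growth is why squeezing more out of a fixed-size model (reasoning, RLVR) is currently the cheaper lever.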
> Well, LLMs are themselves very broad.
I think overstating their broad-ness is core to the hype-cycle going on. Everyone wants to believe—or wants a buyer to believe—that a machine which can grow documents about X is just as good (and reliable) as actually creating X.