OpenAI, Anthropic, and others will be bought for cents on the dollar.
OpenAI is an existential threat to all big tech, including Meta, Google, Microsoft, and Apple. Hence, they're all spending lavishly right now to not get left behind.
Meta --> GenAI content creation can disrupt Instagram. ChatGPT likely has more data on a person than Instagram does by now for ads. 800 million daily active users for ChatGPT already.
Google --> Cash cow search is under threat from ChatGPT.
Microsoft --> Productivity/work is fundamentally changed with GenAI.
Apple --> OpenAI can make a device that runs ChatGPT as the OS instead of relying on iOS.
I'm betting that OpenAI will emerge bigger than current big tech in ~5 years or less.
> Apple --> OpenAI can make a device that runs ChatGPT as the OS instead of relying on iOS.
Yeah... No they can't. I don't agree with any of your "disruptions," but this one is just comically incorrect. There was a post on HN somewhat recently that was a simulated computer using LLMs, and it was unusable.
I think it should be obvious just from thinking about how much more an OS is beyond the UI for launching programs.
Not to mention you would need an order of magnitude improvement in on-device inference speed to make this feasible at current smartphone costs. Or they could offload it and sell an insecure overpriced-subscription laggy texting device that bricks when you don’t have cell service…
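The order-of-magnitude gap is easy to sanity-check with a back-of-envelope calculation (all numbers below are my own rough assumptions, not measurements): autoregressive decoding is roughly memory-bandwidth bound, since every generated token streams all the model weights from RAM.

```python
# Rough sketch: decode speed ~ memory bandwidth / model size, assuming
# memory-bandwidth-bound autoregressive decoding (a common approximation).
def decode_tokens_per_sec(params_billions, bits_per_weight, bandwidth_gb_s):
    weights_gb = params_billions * bits_per_weight / 8  # weight size in GB
    return bandwidth_gb_s / weights_gb

# Assumed figures: an 8B-parameter model quantized to 4 bits, a phone SoC
# with ~60 GB/s of LPDDR bandwidth vs an HBM-class datacenter accelerator.
phone = decode_tokens_per_sec(8, 4, 60)        # ~15 tokens/s
datacenter = decode_tokens_per_sec(8, 4, 3000) # ~750 tokens/s
print(round(phone), round(datacenter))
```

On those assumed numbers the gap is well over an order of magnitude, which is the point about needing to offload.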
It isn't going to happen soon. Maybe 4-6 years from now.
But it is clearly the direction. Apple will try to stave off this move by turning iOS more into an LLM as well.
I find myself doing more and more inside ChatGPT. When ChatGPT inevitably can generate GUIs on the fly, book me an Uber, etc., I don't see why iOS wouldn't have competition.
OpenAI has no technical moat: others can do what they do, generate the same content, and train on largely the same data.
OpenAI does not expect to be cash-flow positive until 2029. When no new capital comes in, it can't continue.
OpenAI can't survive any kind of price competition.
They consistently have the best or second best models.
They have infrastructure that serves 800 million monthly active users.
Investors are lining up to give them money. When they IPO, they'll easily be worth over $1 trillion.
There's price competition right now. They're still surviving. If there is price competition, they're the most likely to survive.
They have <a really expensive> infrastructure that serves 800 million monthly active <but non-paying> users.
Even worse, they train their model(s) on the interactions of those non-paying customers, which makes the model(s) less useful for paying customers. It's kind of a "you can't charge Porsche prices if you only satisfy the needs of a typical Dacia owner" situation.
I give more of my data to OpenAI than to Meta. ChatGPT knows so much about me. Don't you think they can easily monetize their 800 million (close to 1 billion by now) users?
Meta has the giant advantage that other people interact with your data. I think that is wildly more valuable than what chat engines have.
Given that OpenAI has publicly stated that they're working on monetizing free users (ads), I think they can make ads targeting as good as Meta can.
This is why Meta is all in on AI by the way. With nearly 1 billion users, ChatGPT is a huge threat to Meta's ad empire.
>With nearly 1 billion users, ChatGPT is a huge threat to Meta's ad empire.
People are on Facebook to interact with other human beings, not LLMs. People won't leave Facebook to use ChatGPT.
https://openai.com/index/group-chats-in-chatgpt/
Creeping. ChatGPT is becoming more than just a "talk with an LLM" app.
> Don't you think they can easily monetize their 800 million [...] users?
I am pretty sure they will be able to monetize it. But there is a big difference between "generating revenue" and "generating profit". It's way cheaper to put ads between posts of your friends (like FB started out with ads) than to put ads next to the response of an LLM, because LLM responses have to be unique, while a holiday photo of yours might be interesting to all of your friends, and LLM inference is quite expensive, while hosting holiday photos is cheap. IMHO this is the reason why the 5th generation of ChatGPT models tries to answer all possible questions of the world in one single response, kinda hoping that I am going to be happy with it and just close the chat.
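The unit-economics point can be made concrete with purely illustrative numbers (every cost below is an assumption for the sketch, not a real figure):

```python
# Illustrative comparison: serving a cached static photo vs generating a
# unique LLM response for every single impression. All numbers assumed.
photo_cost = 0.00001            # assumed cost per photo view (CDN), dollars
llm_cost_per_1k_tokens = 0.001  # assumed inference cost per 1K output tokens
tokens_per_response = 500       # assumed length of a typical answer

llm_cost = llm_cost_per_1k_tokens * tokens_per_response / 1000
print(round(llm_cost / photo_cost))  # ratio of per-impression costs
```

Even with generous assumptions, the content the ad sits next to costs tens of times more to produce per impression, which is the whole point about revenue vs profit.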
> Investors are lining up to give them money. When they IPO, they'll easily be worth over $1 trillion.
Your premise is that there is no bubble. We are talking about what happens when the bubble bursts, and a burst is exactly that: investor money drying up.
I think we are in 1995 of the dotcom bubble for AI.
Clearly, a lot of people here disagree with you. Doesn't mean you cannot be right, but in general, the HN crowd is a pretty good predictor of the trends in the tech industry.
Bitcoin is going to be the next universal payment system anytime now...
Weird example to trot out as a bubble when, at any point in its history, if you held for a few years or so you'd be pretty far ahead on your investment. It clearly shows people are awful at calling out bubbles.
The mass is usually wrong on predicting these kinds of events. I don't see why HN is any different than Reddit group think.
Nobody was predicting the dotcom or financial-crisis bubbles. The fact that everyone and their grandma is calling this a bubble makes me think that it simply can't be one.
More like 1998
I asked a few weeks ago if we are at the Pets.com stage of the bubble yet.
> They consistently have the best or second best models.
This is the problem with your original argument. It assumes that having a "good model" (e.g. one that performs well on some benchmarks) has something to do with success in the real world. It doesn't. If you can show that it does, your thesis might have at least a glimmer of credibility.
The idea that a chatbot will somehow displace an operating system is the kind of absurdity that follows from making this error.
What if investors stop giving them money before they IPO?
I’ll HAPPILY bet that it won’t. $10,000 to a charity of each other’s choosing?
OpenAI has yet to make a single, solitary thing that works well. It's nothing but Sam Altman hyping things. They aren't an existential threat to anyone.
GPT-3 and GPT-4 were impressive and kind of kicked off the current AI boom/bubble. Since then, though, Altman turning the non-profit OpenAI into a kind of for-profit Closed AI seems to have led to a lot of talent leaving.
> Apple --> OpenAI can make a device that runs ChatGPT as the OS instead of relying on iOS.
Or, instead of spending billions training models that are nearly all the same, they take advantage of all the datacenters full of GPUs, and of AI companies frantically trying to make a profit (many most likely crashing and burning in the process), to pay relative pennies to use the top, nearly commoditized, model of the month?
Then, maybe someday, they start late and take advantage of the latest research/training methods that shave years off training time, saving billions on a foundation model of their own?
I don't think it makes sense for Apple to be an AI company. It makes sense for them to use AI, but I don't see why everyone needs their own model right now, during all the churn. It's already nearly a commodity. In-house doesn't make sense to me.
> I'm betting that OpenAI will emerge bigger than current big tech in ~5 years or less.
I seriously doubt it. If this bubble pops, the best OpenAI can hope for is they just get absorbed into Microsoft.
>Apple --> OpenAI can make a device that runs ChatGPT as the OS instead of relying on iOS.
Ah yes, PromptOS will go down in the history books for sure.
No, LLMs are an existential threat. OpenAI is a heavily leveraged prop-model company selling inference, which often has a model a few months ahead of its competitors.
AI isn’t bullshit, but selling access to a proprietary model has certainly not been proven as a business model yet.