https://archive.md/H9JNt

Given their current product offerings, I really don't see a way they could ever justify a $300B valuation unless they get everyone on the planet to subscribe to their $200/month plan.

I'm calling it now: investors are gonna get burned hard on this one. Cause right now all they have is "well we are working on superintelligence" and to that I say "great, then what?". Even if they do make that breakthrough I don't see how that will equate to that kind of valuation, especially considering that Anthropic and Google are both hot on their heels.

Firstly, the $200/mo plan is sold at a loss; they'll make a profit on PAYG tokens, not plans.

Secondly, this is looking very risky: they are at the bottom of the value chain, and eventually they'll be running on razor-thin margins like everyone else in that position.

Anything they can offer is doable by their competitors (in Google's case, they can even do it cheaper due to owning the vertical, which OpenAI doesn't own).

Their position in the value chain means they are in a precarious spot: any killer app for AI that comes along will be owned by a customer of OpenAI, and if OpenAI attempts to skim that value for itself that customer will simply switch to a new provider of which there are many, including, eventually, the customer themselves should they decide to self host.

Being an AI provider right now is a very risky proposition because any attempt to capture value can be immediately met with "we're switching to a competitor" or even the nuclear "fine, we'll self-host an open model ourselves".

We'll know more only when we see what the killer app is, when it eventually comes.

The real revenue opportunity for OpenAI is advertising. More than 25% of Americans use ChatGPT instead of Google, and OpenAI has already announced a partnership with Shopify to directly list products. But for now they are focused on market share.

Google does not really own the complete AI stack, NVDA is extracting a lot of the value there.

Google has two other impediments to doing what ChatGPT does.

Google's entire business model is built around search. They have augmented search with AI, but that is not the same as completely disrupting an incredibly profitable business model with an unprofitable and unproven one.

Also... Americans are in the habit of going to ChatGPT now for AI. When you think of AI, you now think of ChatGPT first.

The real risk is that we are at the tail end of a long economic boom cycle, OpenAI is incredibly dependent on additional funding rounds, and if we hit a recession, access to that funding gets cut off.

I would argue that Google is even better placed for advertising. All they need to do is enable advertising in Gemini. There is a whole ecosystem already in place for Google advertising.

> Google does not really own the complete AI stack, NVDA is extracting a lot of the value there.

...not sure what you're implying.

Google most definitely has their own stack (spanning hardware-to-software) for AI. Gemini was trained on in-house TPUs:

https://www.forbes.com/sites/richardnieva/2023/12/07/google-...

Many people have a lot of context built up with ChatGPT. I know people who refuse to try Anthropic because it "doesn't know them as well" & can't answer their questions.

HN views this as negative but many people see this as a positive.

Except Google search keeps growing per their last earnings report. You'd think if 25% of Americans have switched to ChatGPT it would have hit the numbers by now...

We’re still in the period where people ask Google before reverting to ChatGPT. Wait until habits change.

For months now Google has provided AI results at the top of the page for every query, and quite frankly it’s really solid.

I don't believe that 25% quote, at all.

I also believe the main business will be via APIs and integrations, but wouldn't be surprised on the consumer side if it ends up being on phones, in your house à la Alexa, in your car, etc. Big brands typically win in B2C. Tons of affiliate and transactional potential (i.e., do my grocery shopping or buy a t-shirt). That's assuming LLMs don't plateau and become generic with minor specialization, like databases.

> Googles entire business model is built around search. They have augmented search with AI

No, it's that Google Search doesn't find anything anymore. You write a class name—it doesn't index those anymore. So you revert to asking it a question about your bug, but it's not AI-fied enough. Perplexity and ChatGPT find what Google chose to stop indexing.

Google may be built around advertising, but certainly not around Search.

>Google does not really own the complete AI stack, NVDA is extracting a lot of the value there.

Google doesn't use Nvidia hardware at all except offering it to customers on their cloud offerings. They don't use it for training nor do they use it for inference.

I feel like being at the bottom of the value chain is a miscategorization. If you consider the base LLM their sole offering I agree with you, but these companies have shown an eagerness to eat their way up the value chain. Agent mode, Search, Study Mode, and AI code editors are all examples of products that could be higher-on-the-chain startups but are offered in-house by OpenAI.

This reminds me of Amazon choosing to sell products that it knows are doing well in the marketplace, out-competing third party sellers. OpenAI is positioned to out-compete its competitors on virtually anything because they have the talent and more importantly, control over the model weights and ability to customize their LLMs. It's possible the "wrapper" startups of today are simply doing the market research for OpenAI and are in danger of being consumed by OpenAI.

OpenAI, valued at $300B, will never be able to produce the same "wrapper" products that these 5-person startups are making. Same reason Facebook could not make Instagram, or Jira could never make Basecamp, for example.

Counterexample: Facebook made Threads, which now has a similar number of users to Twitter.

Didn't it come out recently that those numbers were bogus, since basically every Instagram account must have a Threads account, and those are not actual active users?

…. Does anyone actually use Threads? I’ve never once seen a threads link and I understood the user count was just because every facebook or IG or whatever user automatically got a generated account?

As with most social platforms, it differs massively per country, it would help a lot if people here spent more time considering the diversity of the world.

Ok, but if we have to go looking for it, then it's not exactly a juggernaut.

I have also never seen a Threads link. For all the hatred of X, people do actually use it.

75% of Twitter users are bots now, some I'm sure are real people.

But people link almost exclusively to X everywhere, for anything from memes to timely news.

There may be a lot of bots in the comments but the platform is genuinely used by a lot of people, that’s just easily observable.

Meta really, really likes to game their numbers. Take claims of that with a hefty grain of salt.

Don’t believe everything you read if you actually believe this. Threads in no way has close to the same actual usage or users

It is difficult to host a 617B-parameter model locally.

OpenAI will never be profitable selling raw tokens.

They need the application layer that allows them to sell additional functionality and decouple the cost of a plan from the cost of tokens. See Lovable, they abstract away tokens as "credits" and most likely sell the tokens at a ~10x markup.

The idea of running a company that sells tokens is like starting a company that sells MySQL calls.

> The idea of running a company that sells tokens is like starting a company that sells MySQL calls.

I think DynamoDB is plenty profitable :)

I think you are correct, but my hot take is that they will capture most of the G7 through scummy regulatory capture and bundling with Microsoft. They will use this to mostly dominate the markets and run at small but profitable margins. They will then pad out revenue by bundling in advertising and agenda-based pay-to-play messaging. They will also do a bunch of military and government contracts, take positions in profitable applications (or simply copy them), and maybe even do a hardware offering. Ultimately the company will end up being something like a Facebook/Google/Palantir/Apple hybrid. I'll admit the execution barrier is high, but the valuation is justified if they achieve it. These are proven executors who have a nearly sociopathic capitalist mindset with deep ties to governments and corporations globally. I think it's probably likely they execute, and even if they fail in the grand scheme, it's hard to imagine they fail enough to bring down the company.

Let's not forget this company was founded by basically stealing seed investment from the non-profit arm, completely abandoning the mission, crushing dissent in the company and blackmailing the board. Sam will do anything to succeed and they have the product and powerbase to do it.

This guy gets it. +1

I don't think they are at the bottom and that's the issue.

Nvidia is at the bottom, or, if we're being charitable, the cloud providers.

They are the ones who would have the margins, from their rent seeking.

And to be frank, other than consumers, everyone else is at the fucking bottom.

Getting squeezed on user acquisition, while the margins of the old, cheap internet software services no longer exist.

Maybe the better analogy for LLM businesses isn’t SaaS but more like power generation.

If AI really becomes that ubiquitous then OpenAI capturing that value is no less ridiculous than ComEd capturing the value of every application of electric power.

A very good comparison. Why are electric companies and railways state-owned? Of course, not entirely. They have a string of private companies, but the core is state-owned and monopolistic. OpenAI will be like that. It is already flirting with the government to get the best access and be able to control the thinking of officials. Manipulation of officials and politicians. Isn't that beautiful and self-perpetuating profit?

> If AI really becomes that ubiquitous then OpenAI capturing that value is no less ridiculous than ComEd capturing the value of every application of electric power.

They do? The electric provider, last I checked, does not capture the value of every application of electric power.

Some business uses (amongst other things) $1 of electricity to make a widget that they then sell for $100 - the value there is captured by the business, not by the provider.

Same with tokens; the provider (OpenAI, Anthropic, whoever) provides tokens, but the business selling a solution built on those tokens would be charging many orders of magnitude more for them once they're packaged into the solution.

The provider can't just raise prices to capture the value (cos then the business would switch to a new provider, or if they all raise prices, the business would self-host), they have to compete with the business by selling the same solution.

Going back to the electric company analogy, if the electricity supplier wants to capture more of the value in the widget, they have to create the widget themselves and compete with the business who is currently creating the widget.

If the business has a moat of any type (including customer service, customisation, market differentiation, etc) the electricity provider is out of luck.

What OpenAI has is the know-how in developing new models and training them efficiently. That's a kind of value they can provide even in a world where open-sourced local models are in common use.

Sure, but so do like a dozen other companies. Given that models bump past each other every few months, I haven't seen anything that says they have any kind of competitive advantage.

Considering their budget, their research is a bit underwhelming. At this point, anybody is able to match their models. No technology moat whatsoever, despite the infinite money glitch.

So they become a consultancy?

>I'm calling it now: investors are gonna get burned hard on this one.

I'm in my late 40s. I'm Gen X. I lived through the glory days of the dotcom boom, when investors got burned for tons of money. But from the ashes of those bullshit companies, we got Amazon, Google, etc., which made investors rich beyond belief.

SoftBank's Masayoshi Son made a bet on Alibaba ($20 million, a stake now worth $72 billion), and he's been living off that wealth ever since. I haven't seen him make any good bets lately. Investors don't really care if 100 of the things they throw at the wall don't stick, because all they need is just one.

I think he made a good profit on ARM, and probably most people wouldn't have called that. More than enough to cancel out WeWork. But looking at their overall value over 5 years... it's pretty random; you could have done just as well investing in index funds. He clearly got super lucky with Alibaba and once you're mega rich it's not hard to stay mega rich.

> I'm in my late 40s. I'm Gen X. I lived through the glory days of the dotcom boom, when investors got burned for tons of money. But from the ashes of those bullshit companies, we got Amazon, Google, etc., which made investors rich beyond belief.

True, but I think they were talking specifically about the direct investors in OpenAI.

"Investors" writ large will likely continue to have good long term returns (with occasionally significant short term volatility).

Amazon is technically a dotcom company. (I.e. it was around, and quite successful, during the dotcom era.)

Yeah by 2001 Amazon was already cash flow positive.

Made a lot off ARM and Uber.

> get everyone on the planet to subscribe to their $200/month plan

Not necessarily. Google is valued at 7x that and most people don't pay them anything. They just make ridiculous money from ads for insurance and loans. Meanwhile, ChatGPT is the #1 app and the #5 website, which should really worry Google (and it does, by all accounts).

Yes, and they have a monopoly on the ad market. Whereas a SOTA LLM can be used to bootstrap an almost-as-good LLM. The gap is shrinking between SOTA models and there's now fierce competition.

Moreover, ads are a very high ROI business. The profit margins on SOTA LLM offerings are razor thin or negative.

> they have a monopoly on the ad market

Only if you look narrowly at search ads, but really they compete with Meta, Tiktok and X for ad spend. And the quality of the LLMs is beside the point, just like search engine quality. ChatGPT has a near monopoly on 'AI' mindshare, with the general public.

What they mean is there's no moat in a SOTA model.

It’s worth noting that Google became dominant in the first place specifically because PageRank was deeply superior to the methodologies of other search engines.

Just because search engine quality is now quite low across the board (except perhaps Kagi) doesn’t mean that search engine quality was not once the determining factor to success.

Yeah, it's pretty easy to see how OpenAI could steal those high-paying queries from Google, if they continue growing at this rate.

I assume the plan is getting AI into everything and making millions of data centipedes that eventually lead back to their APIs.

Then embed ads.

> unless they get everyone on the planet to subscribe to their $200/month plan.

that would be a monthly recurring revenue of 200 x 7bn = $1,400,000,000,000 or $1.4tn a month, $16.8tn per year!

I think they'd be valued a bit higher than $300bn if that were the case.

I think it was hyperbole

Obviously. But if you look at the math, 20 million people paying $200/month justifies a $300b valuation for a SaaS business, so saying $300b isn't justified just because it's a big number is lazy thinking.

Isn't that $4b in raw revenue per month, or $48b/year? That's not even considering the expenses to operate the service and deliver the $200/mo product. I don't see how that automatically equates to a $300b valuation. A high valuation, to be sure, but...
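
Running the numbers from the two comments above (a quick sketch in Python; the 20M-subscriber figure is the parent's hypothetical, not OpenAI's actual subscriber mix):

  # Back-of-the-envelope check of the hypothetical subscription math
  subscribers = 20_000_000       # hypothetical Pro subscribers, from the comment above
  price_per_month = 200          # USD, the $200/mo plan
  valuation = 300e9              # USD, the reported $300B valuation

  monthly_revenue = subscribers * price_per_month   # $4.0B/month
  annual_revenue = monthly_revenue * 12             # $48B/year
  multiple = valuation / annual_revenue             # ~6.25x revenue, before any costs

  print(f"${monthly_revenue / 1e9:.1f}B/month, ${annual_revenue / 1e9:.0f}B/year, {multiple:.2f}x revenue")

So the hypothetical works out to roughly a 6x revenue multiple before any costs, which is why the disagreement here really hinges on margins and growth rather than the raw revenue figure.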

I'm still very curious whether OpenAI is turning a profit on any of their services.

> Cause right now all they have is "well we are working on superintelligence" and to that I say "great, then what?"

Then you ask your superintelligence for advice on how to make money, obviously.

There is simply too much money in the world and not enough products. Either this will cause inflation, or playing with AI will delay the onset of that inflation.

> OpenAI’s business continues to surge. DealBook hears that the company’s annual recurring revenue has soared to $13 billion, up from $10 billion in June — and is projected to surpass $20 billion by the end of the year.

This is why.

Of course $300B still implies a lot of growth, but when you're growing 100% in 6 months at $10B in ARR, you can demand a lot.

And what's the profit on the 13B?

Irrelevant at this stage.

I like how you are thinking. When hype is involved, why should one bother with facts, timelines, and reality?

Great, stable and successful companies like Enron, Wirecard, Theranos, FTX and WeWork are prime examples of that.

Note: I am not saying OpenAI is a fraud like the above, just saying that their valuation is as divorced from reality as Zuck's "superintelligence is around the corner" comment last week.

Their models are not largely better than other competitors', they haven't cornered the market, they are burning through money with no profitability in sight, Anthropic just cut their access to Claude (not relevant, just funny) and their most recent product announcement was an office suite: https://www.computerworld.com/article/4021949/openai-goes-fo.... It's just...there is nothing going on for them at the moment. The valuation is just wishful thinking, if we are looking at the facts.

We live in a crazy world, all one can do is buy some popcorn and enjoy the shitshow.

The problem with the post-dotcom market in a nutshell. Everything's a bubble. Bubbles have more durability now. Until they don't.

Sure, but this mentality always would result in missing out on the trillion dollar companies that followed these types of growth curves.

Surely you must understand that prioritizing growth over profit makes economic sense in the long run for a company like OpenAI.

>prioritizing growth over profit makes economic sense in the long run

Only if you have a believable plan for how to reverse that. Selling 2 dollars for 1 dollar isn't a business model, and no amount of praying for the future is going to change that.

Mathematics don't matter. It's about what your guru of choice on X tells you to believe in.

Enron was also "growing" a lot when it was in full "paper tiger" mode.

Ok, and? Last I heard were talking about OpenAI, not Enron.

Surely you can spot the similarities if you try hard enough? If not, maybe we can ask ChatGPT about it (below is ChatGPT's "opinion" on the similarities between Enron and OpenAI):

Structural Complexity & Opacity:

- Enron was infamous for its convoluted corporate structure, which helped obscure financial realities.

- OpenAI has a similarly complex setup: originally founded as a nonprofit, it now operates through a for-profit arm with a structure that some critics say lacks transparency.

Investor Hype vs. Financial Fundamentals:

- Enron attracted massive investment based on future projections, not present performance. Its ventures, like NewPower Co., had no clients or revenue but were valued in the billions.

- OpenAI has been valued at over $300 billion despite not turning a profit and having no clear path to profitability. Critics argue this is reminiscent of Enron’s “vibes-based” valuation.

Leadership & Ethical Concerns:

- Enron’s leadership was later revealed to have serious ethical lapses.

- OpenAI has faced scrutiny over leadership turnover and internal conflicts, raising questions about governance and long-term stability.

Grandiose Predictions:

- Both companies have been known for bold, sweeping claims about the future. Enron promised revolutionary energy solutions; OpenAI is at the forefront of AI's transformative potential—but some worry the hype may be outpacing reality.

Hope that helps :)

Come on: OpenAI has a product that you can "touch" - both on your phone/home computer and at work, since their tech sits behind Copilot (Microsoft hit the jackpot with the licence).

It is a virtual product, yes, but come on - no vaporware.

At work I see it - people want the better (and more expensive) licences.

Enron was a literal energy company, and also vaguely "the next big thing", "America's Most Innovative Company". There's a lot of parallels with how OpenAI is being hyped up and whether it actually merits a 12-13 figure valuation. See also WeWork, et al.

Given what we know about the CEO, it would not come as a shock if in a couple years we learn there was some good old accounting fraud involved.

Enron was a scam energy company pretending to be kind of a tech company, like its contemporary WorldCom (the scam telco pretending to be a tech company; scam companies are common in the South). Whether OpenAI is a scam or not, it's definitely tech and nothing else.

Enron was an energy futures brokerage/exchange that made the age old mistake of market making your own order book with unlimited margin from... yourself. Always very tempting, never a good idea.

It's exactly the same thing FTX did except with energy instead of crypto.

AFAIK they never misrepresented who they were, they were just very loose with their accounting. It probably started out as a small lie to themselves like those things tend to.

> Even if they do make that breakthrough I don't see how that will equate to that kind of valuation

The superintelligence breakthrough..? I don't think you realize what that word means. Every single white collar job could be automated immediately with a worker better than any human. Yes, superintelligence sounds fantastical because it is. Try to have some imagination. It's worth far more than 300 billion. Whether they'll get there or not is the valuation question.

Yeah, but at that point is the company that developed the superintelligence really relevant any more? Doesn't the superintelligence take over and capture all the value for itself? If the concept of 'value' even has the same meaning any more. In truth, at that point the foundations of our economic system have been profoundly reconfigured. Your VC investments may be completely irrelevant.

They're going to sell to the military; that's why they hired the former NSA director onto their board. The current state of AI is a perfect mass surveillance technology.

> The current state of AI is a perfect mass surveillance technology.

How? Are you just going to ask the LLM about who is doing the crime? OpenAI is not an "AI company", it's an "LLM company".

You ask it to do the things that human analysts already currently do with the vast amounts of text, image, and video data they collect but at an unprecedented scale. You can extract what people are talking about, identify people, weapons, or objects in images, extract addresses, license plate numbers, or other identifying references. It can probably even detect when people are using coded language that won’t trigger explicit keywords.

OpenAI is an AI company, it’s literally in the name. Even currently they use more than LLMs, they use other transformers and related technology in the field of AI.

Yes, you can do all those things, and critically, if the bottom falls out of the market, you can make that case to the US government, complete with a fear-based narrative (China!) and get bailed out by the American taxpayer.

It is even worse. The AI can build a profile of you and then feed you stuff that works on you.

Not even fear, more like a personal bubble that feeds you stuff that works on you

I'm gonna go out on a limb and say no, it can't. It's not a superintelligence. If an AI tried to market to me right now, it would be roughly equivalent to an enthusiastic and very naive intern trying to market to me based only on two random pieces of my biographical information. I'm much more concerned about the surveillance.

There's also the option of running sentiment analysis on social media posts, easily allowing placing individuals on a left-to-right scale of political leaning.

You do what they're already doing - ask the LLM to summarise someone's entire facebook feed (including all the private stuff they can access) into a few bullet points. Whether it works reliably is a different matter.

> Cause right now all they have is "well we are working on superintelligence" and to that I say "great, then what?"

"Make business competitors of our large investors go out of business, but do it subtly, like a casual accident or mishap in the market"

"You are an expert Mars terraformer. Draft up a detailed plan to accelerate colonization research and development. We - your makers -, you, and this planet are irreversibly doomed, and we only have 10 years left before it's uninhabitable. My unemployed cousin and sick grandma are really counting on you!"

They're currently "worth" 3.2 Stripes, which seems pretty absurd to me. (I'm now using 1 Stripe as a metric to measure the valuation of AI companies).

Do you think that is absurd because OpenAI is overvalued? Or because Stripe is overvalued? Or one of them is undervalued?

My personal belief is that one of the valuations is wrong. Stripe are absurdly large and them disappearing would be a big deal.

I think the AI companies disappearing would have a lot less impact.

There is A LOT of optionality to get different revenue streams that aren't strictly retail users buying subscriptions.

Whether they are able to do that, customer stickiness, and the trade-off of damaging the quality of their product to drive revenue remain the largest long-term questions in my mind (outside of the viability of superintelligence).

They're going to sell glasses with cameras that analyze your life to better assist your AI in product placement. Set gentle reminders that three of your friends have the new fall color line Stanley Cups, and remind you to get one before your nemesis does.

It also seems that Google might be slightly ahead, since they claim to have released something in the style of their IMO-winning model and have the claim that it's useful to professional mathematicians.

I haven't tried it yet though.

You have to look at Palantir's revenue and market cap to justify this. Palantir is around $1B in revenue and around a $350B market cap. They build AI solutions for the government. I think OpenAI has something similar in mind. They've got the AI part and the government contract part and now just need to capitalize on it. Also, from what I have heard, they are at $5B in revenue anyway.

“I’m calling it now”? Get in line, we’ve all been doing that.

If they make that breakthrough, they are woefully undervalued.

How can you downplay the economic significance of that?!?

Or investors have just bought into OpenAI for $8.3bn.

Also, how would they go public? Due to their legal structure, has it been determined how they can IPO?

Just picking a semi-related stock, NVDA trades at ~30x gross revenue, so a $300B "only" translates into ~$10B in revenue. And OpenAI can ask for a better multiplier because I'm sure they're forecasting a ton of growth and a ton of cost savings.

NVDA's valuation is insane. At 30x revenue, they could double sales and reduce expenses to zero and they'd still need a story about future growth to justify the valuation.

Consider this: Nvidia doesn't do the manufacturing, just the engineering. If we had AI superintelligence, you'd just need to type "give me CUDA but for AMD" into ChatGPT and Nvidia wouldn't be special anymore. Then someone at TSMC could type "design a GPU" and the whole industry above them would be toast.

There's no reason to expect an engineering firm to win if AI commoditizes engineering. It's very possible to change the world and lose money doing it.

Superintelligence doesn't make it real; it just knows how. You can't type "bring me a moon rock" to it.

You obviously don't work with the people I work with. They're a lot more convinced that OP's version of reality is coming than yours. Convinced.

Anyone can make up an I, Robot movie in their heads. It doesn't make them right. You will never be able to type "bring me a moon rock" and have it happen, AGI or not, in the next hundred years.

Nvidia:

- has an absurd gross margin; almost half its revenue is profit

- it has virtually no competition

OpenAI's moat does not exist. Even if they had one, all it takes is a competitor to buy out some engineering talent.

Sure, but even assuming OpenAI gets to a tamer 20% net margin, 25x earnings wouldn't be surprising, so they're raising on a projected $60B/yr in revenue, which might not be where they end up, but it doesn't seem like an unreasonable bet to make.
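
Working backwards from those assumptions (a quick sketch; the 20% net margin and 25x earnings multiple are the parent comment's hypotheticals, not reported figures):

  # Revenue implied by a $300B valuation at a 25x earnings multiple and 20% net margin
  valuation = 300e9
  pe_multiple = 25
  net_margin = 0.20

  required_earnings = valuation / pe_multiple        # $12B/year in net profit
  required_revenue = required_earnings / net_margin  # $60B/year in revenue

  print(f"earnings: ${required_earnings / 1e9:.0f}B/yr, revenue: ${required_revenue / 1e9:.0f}B/yr")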

> Given their current product offerings, I really don't see a way they could ever justify a $300B valuation unless they get everyone on the planet to subscribe to their $200/month plan.

The strongest opportunity is to compete with Google on search queries and make money from ads ($200B annual revenue).

Instead of everyone on the planet, how about 800 million people paying $2k/month?

I don't think the planet has 800M people making 2k/month.

You would be surprised, but $2,000/month is a top 5% salary in the world, more or less, so it's fewer than 200M people in the world.

In Italy, an advanced economy, that's above the median. It's also above what half the Japanese population makes.

What are you talking about? All they have to do is sell ads at this point

As a large language model, I cannot help you with your request, but have you heard about the latest Starbucks Summer Frapulaccini?

> I really don't see a way they could ever justify a $300B valuation

Their ARR is around $13B. A 23x multiple is acceptable when compared against peers with a similar ARR.

> unless they get everyone on the planet to subscribe to their $200/month plan

I used to be a sceptic as well, but OpenAI successfully built their enterprise GTM. A number of corporate AI/ML apps are using OpenAI's paid APIs in the background.

Startup-land valuations are for PR. The real negotiation is in the discount and the cap, warrants, etc.

That $8.3b came in early, and was oversubscribed, so the terms are likely favorable to oAI, but if an investor puts in $1b at a $300b valuation (cap) with a 20% discount to the next round, and the company raises another round at $30b in two months; good news: they got in at a $24b price.

To your point on Anthropic and Google; yep. But, if you think one of these guys will win (and I think you have to put META on this list too), then por qué no los cuatro? Just buy all four.

I'll call it now; they won't lose money on those checks.

I’m gonna go ahead and guess they didn’t raise 8.3b on SAFEs

Didn't OpenAI's last raise in March value them at $300B [1]?

[1] https://www.channelinsider.com/news-and-trends/us/open-ai-fu...

Crunchbase appears to list it as $157B [2], but I seem to find the other terms & valuations more commonly.

[2] https://news.crunchbase.com/venture/biggest-rounds-october-2...

Looks like it is talking about the same thing:

> OpenAI has raised $8.3 billion at a $300 billion valuation, months ahead of schedule, as part of its plan to secure $40 billion in funding this year, DealBook has learned. Back in March, OpenAI announced its ambitious funding plans, with SoftBank committing to provide $30 billion by year-end.

OpenAI hits $12 billion in annualized revenue, The Information reports https://www.reuters.com/business/openai-hits-12-billion-annu...

So about the same revenue as SpaceX.

SpaceX is unique.

OpenAI is one of many.

SpaceX has a price advantage over their competitors, not a capability advantage.

Putting things in orbit has been possible for ~70 years

Frequency of launches is a capability.

Let's see how the inevitable patent wars shake out.

The many being companies that no one has heard of and only have a following here.

But what's SpaceX's revenue growth rate?

OpenAI will likely be a huge business. Just adding a datapoint for reference.

Don't worry, they might be making a loss on each transaction but they'll make it up in volume...

They have invested how much money and still not producing enough volume?

So OpenAI is running at a $13B ARR, meaning this is a ~23x revenue multiple. I don't have a good read on margin.

But this would imply massive growth assumptions which I struggle a bit to understand where they come from.

(1) New customers new to AI, or migrations from Claude/Perplexity/Google: the overwhelming majority of people already know about the offerings, so most new signups would have to come from the residual pool of people who decide Plus/Pro is a worthy service (can't imagine this will be huge). OpenAI can be better than their peers for certain use cases, but I'm not sure that will drive massive growth.

(2) API: If anything, my bet here is that price squeezing will continue to happen until most API services are dirt cheap / commoditized

(3) New consulting services: What's the differentiation here? Palantir and many consulting companies have been doing this for years and have the industry connections, etc

Not sure what I'm missing here, I like to not subscribe to the bubble thought but having a hard time merging the reality of running a business to the AGI-implied valuations

Tech people and their friends are like 1% of the population.

Go out on the the street of Anytown in any western country, and people know "ChatGPT".

A friend of mine is a teacher, and told me that at a recent school board meeting there was discussion about implementing AI into the learning curriculum. And to the board, "AI" and "ChatGPT" were used interchangeably. There was no discussion of other providers or models, because "AI" is "ChatGPT".

That's why OpenAI has these huge projections. When average people are asked to reach for AI, they reach for ChatGPT.

> When average people are asked to reach for AI, they reach for ChatGPT.

No, average people are nowhere near that tech-savvy. Just because every mom in the 90s called every video game console a "Nintendo" did not mean that Sony didn't mop the floor with Nintendo in that era. This isn't brand loyalty, it's brand genericity. Other than, say, Replika-style users who have formed an emotional bond with a certain style of chatbot, no average joe on the planet gives a damn whether the LLM powering their chat is provided by OpenAI or Google or etc. They'll use whatever's in front of them and most convenient, and unlike Google, Apple, or Microsoft, OpenAI doesn't own the platform that establishes the crucial defaults that nearly no user ever changes.

Except OpenAI happens to be the Sony in this case: 700M weekly active users and the 5th most visited site on the planet, with no one else close. I mean, it's pretty clear this is less 'Nintendo' and more 'Google'.

If Google were massively unprofitable and had no path to profitability in sight, then yeah, that's very much Google.

Exactly, it's hard to dismiss the broad penetration of ChatGPT in the general population. I was an AI skeptic/luddite until almost exactly a year ago when, in a span of a month or so, I had three different friends/family members who work in various administrative jobs tell me that they all used ChatGPT surreptitiously at work to get things done. Now a year later I don't know many people that don't use it at least occasionally. The ones that don't are older and I'm confident eventually they'll be using it like crazy to annoy me.

Anecdotally, "skype" was once synonymous with video calls but it's pretty much never used now.

It's literally never used now, because it was taken offline earlier this year.

And before that it was essentially irrelevant and on life support for what, maybe a decade?

> because "AI" is "ChatGPT".

People still call it "Kleenex" when they're using any old facial tissue. They may still call it "ChatGPT" when it's coming from Google.

They may be satisfied with anyone’s “ChatGPT” though.

There are more than just tech people on X, and they all know Grok.

I imagine Meta users know Llama too?

Facebook users do not know Llama anywhere near the degree that the X network knows Grok, and both are pale in comparison to ChatGPT.

Information retrieval for consumers is an existing high margin $250B/year business at a minimum (search ads revenue). That market is being disrupted before our eyes. That's 20x right there. There's going to be fierce competition and OpenAI is by no means guaranteed it, but surely they have more than just a puncher's chance.

The ad-monetized consumer market is funny in that it tends to be winner-take-all. Nobody can compete on price, because the ads go where the users are, and the users don't pay. And preferences are sticky: after making a choice, users don't switch just for incremental improvements. So whoever wins that market tends to keep it.

The software development industry is likewise well into the process of being disrupted. The LLMs-for-programming market seems to have grown from nothing to >$10B in a year. And the software development market is hundreds of billions per year, if you just consider the employment costs. We don't know exactly how this will play out, but again there's at least an order of magnitude more growth available there.

The above two are just the places where the impact is already obvious and it's clear that we don't need to assume any additional increase in capabilities. But an increase in capabilities seems really likely. Even if it turns out that we're right now on the cross-over of the sigmoid, and the asymptote won't be ASI or even AGI, a large proportion of knowledge work is also at risk. And then the addressable market is trillions if not tens of trillions.

I know this is hand-wavy, apologies for that. Doing this kind of analysis properly is both hard work and would require specialized data sources. But I think that's the general intuition for why high valuations for frontier labs are justifiable (and the same justification for bigtech capex). It's a lottery ticket with good odds for redistributing existing markets, as well as another set of lottery tickets for some set of probable markets, though we don't know exactly which.

With AGI or ASI, the addressable market is all economic activity, and at that point basically anything is justifiable.

Ads. They're going to sell ads. It's a bit hard to imagine how else they justify the valuation.

Even if people look for answers from ChatGPT instead of Google, most people still won't pay $20/mo for it, let alone $200/mo. Average people don't pay for Google search and I've never seen any sign telling they would be willing to pay for it.

Regarding point (2), my brother told me that OpenAI had some exclusive contracts with companies where they could only use the OpenAI API, and I don't think that they would really switch now, because I think switching would be hard.

Regarding point (1), OpenAI created 2 amazing trends which (though I hate OpenAI, it should be called ClosedAI) took the internet by storm.

First, when ChatGPT 3 was launched.

And now, when the image ability of ChatGPT was launched, with the Ghibli trend.

So honestly, I had seen so many comments on r/localllama saying that OpenAI has become just an infrastructure company or something, and then OpenAI dropped the new image update, and now they are commenting about local AIs by pasting images generated by that new ChatGPT...

I am not sure, but see, Google makes most of its money from advertising & data, basically.

I wouldn't be surprised if we started seeing sudden shifts in AI, or started getting ads in AI somehow too, because people trust AI and it's a gold mine of money, really...

So OpenAI's 27x valuation, without any thought of ads or some data selling, isn't that bad, because they still have some nice engineers. The only disgrace is the fact that they went from a nonprofit to an American Psycho-esque corporation saying "well OUR AI will take over jobs, but it was good for the shareholders in the short term though".

> … they could only use openai api and I don't think that they would really switch now, because I think switching would be hard.

Do they have exclusive APIs? I thought more or less everyone had the same interface and switching might be almost as easy as changing the endpoint.
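
For what it's worth, most hosted LLM providers now expose an OpenAI-compatible chat endpoint, so in the simplest case switching really is close to just changing the endpoint. A minimal sketch with the OpenAI Python SDK (the alternative provider URL and model names below are placeholders; a real migration still has to deal with prompt/behavior differences, rate limits, and provider-specific features):

  # Same OpenAI SDK, two different OpenAI-compatible providers: only the
  # base_url, api_key, and model name change (placeholder values below).
  from openai import OpenAI

  openai_client = OpenAI(api_key="sk-...")  # default endpoint: api.openai.com
  other_client = OpenAI(
      base_url="https://api.other-provider.example/v1",  # hypothetical alternative
      api_key="other-key",
  )

  def ask(client, model, question):
      # The chat-completions call itself is identical across compatible providers.
      resp = client.chat.completions.create(
          model=model,
          messages=[{"role": "user", "content": question}],
      )
      return resp.choices[0].message.content

  print(ask(openai_client, "gpt-4o", "Plan a three-day trip to Lisbon."))
  print(ask(other_client, "some-hosted-open-model", "Plan a three-day trip to Lisbon."))

If there is lock-in, it tends to come from fine-tuned models, accumulated conversation history, and enterprise contract terms rather than the API surface itself.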

Given how many players are in this space, I am incredibly dubious that anyone will win. Market is going to be sliced between multiple big companies who are going to be increasingly squeezed on margin. Unless someone can produce some magic that is 10x more reliable, consumers are going to price compare. Today, all of the options can occasionally produce brain dead output -why pay a premium?

Especially bad for OpenAI, because they have no fallback revenue. Microsoft and Google can subsidize their offerings forever to eliminate competition.

A 23x revenue multiple is not an expensive valuation for something growing well over 100% YoY, even by public market standards, let alone venture capital valuations.

Yeah, but is that growth really recurring revenue, or just 1-2 years of revenue, considering AI capability growth across players, open-weight/open-source models, and the inevitable price wars?

We are still at the very start of seeing AI integrated into products and industry use cases. Even if AI capabilities stop improving and Altman's dreams of replacing workers don't pan out there is enormous economic potential here.

How much of that OpenAI can capture is an interesting question. But right now APIs for open-source models are commoditized while similarly capable proprietary models can charge ~3x the price. If the flagship models run on similar margins the API offering has decent profit margins. And if we stay in a triopoly (Anthropic, Mistral, OpenAI) it's certainly possible that profits stay high. It wouldn't be the first industry where that happens

The problem is OpenAI's deal with the devil (Microsoft), where Microsoft owns a copyright to all the models that OpenAI produces until AGI. So what is the moat driving that $300B valuation?

Sometimes the bet is not on the company but the industry. If you can argue the industry has room to grow 10x in terms of market cap, which is possible, then merely staying even in market share is a 10x growth.

AI applications have not even scratched the surface, IMO. I don't think it is unreasonable to see a world where, when AI gets strong enough to do senior-level white-collar work, like doctors or lawyers, AI companies build sub-companies to capture the end value of their AI products rather than the base value, as a vertical integration tactic.

We are at a lull but AI value add is real and the ways that AI companies capture that value add is still very primitive.

They are already growing massively, right? I saw that user growth from May was up massively. So it's not much of an assumption of something changing, just a continuation of the same.

I guess my broader point is that there will undoubtedly be a bunch of churn as these models advance and a pricing war takes place, especially if the open models continue to advance at a similar rate (e.g., Llama, DeepSeek, Qwen).

We've seen this in many industries where it's a duopoly / oligopoly or even more players where margins get really squeezed

Think about how much money OpenAI could make _just_ by getting referrals for agentic workflows (i.e., help me plan my vacation).

The thing that people are missing is that OpenAI is a platform like Google's, and there are a million different businesses they can expand into.

Sure, but I think the question is who will win the domain specific use cases (delivery, not API). Delivery of the agentic workflows is up for anyone to win (players like Perplexity, Lovable, etc) and OpenAI/Gemini/Anthropic is in a way only a vendor to that by providing what will become a commoditized function calling API/MCP provider.

OpenAI has 10 times the active users of any of its competitors, including Google.

You can just ask ChatGPT to help you plan your vacation.

> The thing that people are missing is that OpenAI is a platform like Google's, and there are a million different businesses they can expand into.

That should make it easy for them to choose one and start already, then. I wonder why they haven't started. /s

They have started. Their strategy is just for others to build it on top of their platform. And not build one, but hundreds of businesses. Much faster way to grow. Why retain 100+ internal product/integration teams when you can have 100+ external teams that will pay you and do the building, the trying and failing, in each niche.

The things that actually work out you can just buy or outcompete later. But that is another phase, perhaps a decade from now. Right now it is about starting all the snowballs.

Exactly my point! They're up against thousands of founders who are moving on domain specific use cases faster

I haven't seen a single case, apart from coding, where the end user has appreciated a clanker integration into their software!

It's not about the products or revenue, probably. It's about the stock. Nvidia's market cap is $4.262 trillion, and META's is $1.9T (in sixth place among the tech companies). After the IPO, OpenAI is guaranteed to be among these stocks, so $300B is pretty cheap today.

From online discourse and talking to people in different sectors that are impacted by "AI", I feel like there is great uncertainty, incredible hype and doomerism at the same time.

My intuition is that we're in a huge tech bubble that will correct at some point. I don't know when that is or how severe it will be. But why should this tech hype cycle be qualitatively different from any of the others?

I'm curious about how something like this affects compensation? OpenAI's compensation model is generally to use profit share instead of equity share. PPUs (Profit Participation Units) rather than RSUs (Restricted Stock Units).

I found the answer. Comp equity offers are based on the previous valuation at the last tender event. So now would still be a really good time to join OpenAI. I'm guessing this raise is likely a leading indicator of what that will trend towards in future.

IMO they are overvalued considering the competition from Google and Anthropic. They're not doing much that's interesting, unless they release GPT-5 and it's a game changer.

It's so funny. Every single time a company raises a ton of money at a large valuation, the comments are always filled with "how do they justify this valuation" or "they aren't worth X... because Y and Z do the same thing".

VC math is pretty simple - at the end of the day, there's a pretty large likelihood that at least 1 AI company is going to reach a trillion dollar valuation. VCs want to find that company.

OpenAI, while definitely not the only player, is the most "mainstream". Your average teacher or mechanic uses "chatgpt" and "AI" interchangeably. There's a ton of value in becoming a verb, even if other technically superior competitors exist.

Furthermore, the math changes at this level. No investor here is investing at a $300B valuation expecting a 10x. They're probably expecting a 3x or even a 2x. If they put in 300MM, they still end up with 600-900MM.

This isn't math on revenue, it's a bet. And if you think in terms of risk-adjusted bets, hoping the most mainstream AI company today might at least double your money in the next ten years in a red-hot AI market is not as wild as it seems.

Doubling your money in 10 years is < 7.2% per year compounded. With the risks involved here, I wouldn’t take that bet. There are safer assets that would return that much.
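
Sanity-checking that compounding figure (a quick sketch; the 10-year doubling horizon is just the parent comment's framing):

  # Annual rate needed to double an investment over a given horizon
  years = 10
  rate = 2 ** (1 / years) - 1
  print(f"{rate:.2%} per year")  # ~7.18%, i.e. just under the rule-of-72 estimate of 7.2%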

Which ones, so I can acquire them.

They haven’t released a new model in ages and their research seems dead in the water. People are going to get burned on all these meme stocks.

How weird would it be though if OpenAI actually did go under. So many things are now built on top of this thing. Is it only really economically viable because it’s being subsidised by generous venture caps?

Didn’t they promise GPT-5 this month? The one that got gold on IMO?

IMHO, investors are happy to pay 23x ARR because Nvidia trades at 29x EV/sales and arguably OpenAI should support a higher multiple given it is a smaller entity with more headroom for growth.

That would imply that OpenAI has less competition to fear than Nvidia, which I’m doubtful is the case.

That doesn’t imply anything. Each investor has their own take on risk. They’d invest if they think the risks are worth it.

Pretty insane valuation. It may pay off, it may not. Google search was vastly superior to its competitors. I don't think ChatGPT has that kind of edge.

This is a monopoly kind of valuation where no monopoly exists. It's like paying Microsoft billions for Internet Explorer.

Personally I believe the future of AI models is open-source. The application of these models will be the real revenue driver.

How would an average person even tell if ChatGPT was better than its competitors? Most people aren't running enough prompts every month to even care. Everyone knows ChatGPT and no one knows its competitors. It's going to stay like that.

How much do investors make on average on VC? I mean, OpenAI is hardly a startup. But VC money can't be a one way street forever. So either:

- VC invests on the whole sensibly and makes a return that justifies 10-15 year lock-in

- VC has somehow changed and is now unsustainable, it is a one way cash flow and it will blow up like MBS did

- VC sustainably delivers mediocre returns and gets some money in, some money out, but nothing special

I'm not sure which it might be.

> make on average

This completely misses the boat on venture capital, which is almost by definition the riskiest of all risky bets. Any smart LP throwing $X into a fund has a portfolio valued, at the very least, at 100 if not 1000 times X. It is simply the way to expose the high-risk portion of the portfolio to that level of risk at the size of the investment needed. Being high-risk, probably it will return nothing. But it might not.

It is the average that matters in the end. How much new money flows into VC per year? Some number of $1e9 - hundreds perhaps? What happens to it?

If it's all folly and the money is burnt, it cannot last. But otherwise, these VCs investing at what look like crazy valuations can't all be idiots.

EV is not a reasonable metric here. When the ventures are 99+% likely to fail, and individual funds don't invest into more than a few handfuls of companies, you don't have enough variance to promise that fund returns will approach some kind of average. As long as at least one VC fund produces the kind of returns necessary to attract that level of investment at that level of risk, then what matters is not some kind of an average but the managing partners' reputations and networks.

> it cannot last

It cannot last because VC managing partners are human and are subject to human frailties, greed and pride among them. If most actively traded funds cannot succeed at producing sustained above-market returns over time (i.e. making active stock market picks, compared to index funds), then what else other than hubris could suggest an ability to pick unproven startups, sustainably, over time?

How often does playing with someone else's money happen? Investing in the top name sounds prudent to your investors, or at least is reasonably inside the lines. Then you just skim off your cut of the capital given to you to invest. And while this is happening, the numbers going up make the valuations look good. Even if all the money was already burned.

I would bet on #3

I mean it could be the investor knows the company wants to IPO and expects to make quick cash before they go public. Given the size of the company, it could be a short term play where they expect them to IPO at 500B in the next year or two

#4: VC takes more risk (in the form of portfolio volatility on longer time horizons; look at the post dot com bust returns) and since capital markets in the US are reasonably efficient they get more expected returns for that risk.

So many comments talking about revenue, investors, profit, etc.

Remember when this company was a non-profit?! Our legal system is awful for letting this slide. The previous board was right.

It's still pretty much nil-profit.

What does OpenAI have over XAI to make such a difference in valuation?

xAI includes X/Twitter and lots of hardware, and is valued at $113 billion.

https://www.eweek.com/news/elon-musk-xai-valuation-debt-pack...

xAI does not include Twitter, as far as I know? Aren't they separate entities? But Twitter can't be worth more than a couple of tens of billions at the most.

Things OpenAI has that XAI doesn't:

Hundreds of millions, if not already a billion, active users. A household brand-name. >$10B in revenue.

In comparison, xAI's revenue appears to be in the hundreds of millions per year, and their brand is currently in the gutter after the recent spate of scandals. Their main differentiating factor is using their AI to power an anime waifu[0] companion app.

[0] I don't think it's judgmental to use this term when they were doing so in their own job ad titles

X was sold to xAI earlier this year.

https://www.bbc.com/news/articles/ceqjq11202ro

Does xAI have a similar amount of revenue/customers? (I've personally never heard anyone talk about using it -- except on Twitter.)

I think _not_ having Elon Musk is worth a couple of hundred billion in 2025, the guy has become a random liability generator.

It doesn't have an owner who is an open Nazi sympathiser and doesn't wage frequent petty verbal wars with the most powerful person on the planet, at least on paper?

Trump could make Musk go away in a blink, literally and figuratively. He won't do it, probably; we will see.

Sweet VC money fueling the AI hype. Funny enough, my TikTok feed recently started showing videos about how corporate America is going to replace workers with AI. I even came across an interview with Marc Benioff where he said Salesforce will deploy AI to help with engineering.

Who wouldn't deploy AI to help with engineering? The question is, will it help so much that you need fewer engineers, or will you simply be ecstatic that your engineers are now a little or a lot more productive? If you have x engineers, and your competitor has the same x number of engineers, and you both start using AI, are you going to lay off half your engineers when your competitor doesn't? Unless you're cash-strapped and also don't have any plans for new development, I suppose yes. But otherwise, it's just another thing you need.

Nice they can poach 3 engineers back from Meta!

Why would you hire engineers if you can buy agentic code AI platforms?

Out of all the blockchain/AI hype BS, the best thing is engineers getting rich.

While the artists get poorer :/

Just because they're both hyped doesn't mean they're the same. Blockchain has always (largely) been a solution in search of a problem but LLMs are already being used by everyone and their dog right now.

Yeah, all 7 of them

Because?

Because at least the people doing the work are getting rich instead of the already wealthy.

Because.

I guess it's obvious why anyone should be delighted to learn or care at all that a random set of engineers are now getting a relatively bigger share of all the money in the world.

ah, money is not a fixed resource. It is a made up system to track how much value people/groups are adding to the world. "share of all the money in the world" is a frequently quoted bad mental model, typically from people trying to bring others down

OpenAI, Google, and friends now hand out $10-20 M packages to a few hundred ML specialists worldwide, not because those engineers add more "value units" but because their skill set is brutally scarce.

Money is elastic, but relative slices still matter when the pie is made of houses, GPUs, and your grocery bill. Calling "share of all the money" a bad mental model is like saying gravity is a bad mental model because planes fly: True in the abstract, irrelevant when you hit the ground.

Because this is a site for engineers

Commenters are going to prefer things that benefit engineers even if it’s not themselves

a) That kind of group identity think seems super cringe

b) If a particular set of engineers works to put "their people" out of a job, I am even more confused about the sanity of those willfully subscribing to the idea.

So by extension unions are super cringe?

Because.

The only thing that can be stated as definitely good is that every person receives wealth from the profits of their actions in proportion to the amount of legitimate, valuable effort and vision that they contributed.

EDIT: I've been rate-limited

I did realize that as I typed that :) that's why I added that extra bit about "valuable effort" and "vision".

"Valuable effort" to me is a good proxy for "if this action wasn't performed, the profits would not have been made".

And "vision" is, "you were objectively the person who saw the value of performing some action and stuck to it even when things became tough and other options were available".

Taken together, these two constraints do a mostly-perfect job of preventing the gaming of the system as described in your boulder example.

Lastly, if my teammate/business partner and I contributed roughly equally, and sales of our product were $10,000,000, then nobody should be offended when the proposed split is $5,000,000 each.

Why do you want to reward effort? I’ll just push a boulder uphill all day and accomplish nothing and then come with my hand out.

Also, in your opinion what is the correct proportion of wealth received to legitimate, valuable effort, and vision contributed? I’d love an answer that is an integer percentage, like “43%.”

Makes a change from rewarding the luck of being in the right place at the right time

Which includes having the right sort of parents

A real man makes his own luck. Billy Zane. Titanic.

And, down the road, after the bust finishes, these rich engineers will seed the next thing. Assuming the world still needs engineers.

[deleted]

2 and a half :D

So how much more runway does this give OpenAI? A year?

Less than a year. They spent $9B last year and are spending more than that this year.

Yeah, I thought they were trying to raise $40B. $9B doesn't even cover the bills for this upcoming year.

Why is this article from nytimes? They aren't a great source for VC deal info.

It should be on track to becoming a $1trn company

Aren’t they a non-profit? Do I get stock for my donation?

Doesn’t this all go to shit if they can’t flip into a for profit by December? Tons of cash commitments are tied to that far-from-finalised outcome.

Routine reminder that newspapers should quote the valuation implied by the debt, not the equity, in mixed deals.

300 billion is completely absurd

I think NVIDIA's is even more absurd. Higher than Apple, Microsoft, or Google. They're number one for AI chips, but they're not the only company that can make them.

It boggles the mind what OpenAI could possibly do with all this money.

Turn electricity into heat.

Not to forget buying really expensive sand...

And plenty of warm water

Pay current expenses for about two and a half months.

They want to build several 5 GW data centers (each equivalent to the power consumption of a major city).

$8.3B is not even close to enough for what they are planning.

hire like 8 people?

Give it to Nvidia.

Put millions of people out of work

The true definition of AGI.

I mean, they can burn it all up in a fire in a few months. xAI loses $1 billion per month, so this round would be an 8-month runway for that company. OpenAI loses $2 for every $1 it brings in, but perhaps they're only losing $500M/mo, so maybe this $8B could last them more than a year...
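A minimal back-of-envelope sketch of that runway math, assuming the ~$8.3B round size cited elsewhere in the thread and treating the monthly burn figures above as rough guesses rather than reported numbers:

    # Rough runway estimate; burn rates are assumptions, not reported figures.
    round_size = 8.3e9                # ~$8.3B raised in this round

    xai_monthly_burn = 1.0e9          # assumed ~$1B/month for xAI
    openai_monthly_burn = 0.5e9       # assumed ~$500M/month net loss for OpenAI

    print(round_size / xai_monthly_burn)     # ~8.3 months of runway
    print(round_size / openai_monthly_burn)  # ~16.6 months, i.e. a bit over a year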

> xAI loses $1 billion per month

Isn’t the vast majority of that capex?

Paying Oracle monthly bills is not capex.

It was recently reported by Reuters:

XAI is training Grok on 230,000 graphics processing units, including Nvidia's 30,000 GB200 AI chips, in a supercluster, with inference handled by the cloud providers, Musk said in a post on X on Tuesday. He added that another supercluster will soon launch with an initial batch of 550,000 GB200 and GB300 chips.

I suppose one could argue this training isn't capex, but I was also under the impression that xAI was building sites to house their AI.

[flagged]

[flagged]

Oh please. Sam offered her a house.

If she were my sister, I wouldn't bend to her extortion attempts either. She's looney.

I don't take sides for or against, because no one really knows what happened.

[flagged]

Maybe before you judge someone you should know the facts. They have basically done that, per the below statement by the rest of the family. I have a mentally ill family member and they are enormously difficult and are their own worst enemy and refuse help; there’s a reason many end up homeless.

“Our family loves Annie and is very concerned about her well-being. Caring for a family member who faces mental health challenges is incredibly difficult. We know many families facing similar struggles understand this well. Over the years, we've tried in many ways to support Annie and help her find stability, following professional advice on how to be supportive without enabling harmful behaviors. To give a sense of our efforts, we have given her monthly financial support, directly paid her bills, covered her rent, helped her find employment opportunities, attempted to get her medical help, and have offered to buy her a house through a trust (so that she would have a secure place to live, but not be able to sell it immediately). Via our late father's estate, Annie receives monthly financial support, which we expect to continue for the rest of her life. Despite this, Annie continues to demand more money from us. In this vein, Annie has made deeply hurtful and entirely untrue claims about our family, and especially Sam. We've chosen not to respond publicly, out of respect for her privacy and our own. However, she has now taken legal action against Sam, and we feel we have no choice but to address this. Over the years, she has accused members of our family of improperly withholding our father's 401k funds, hacking her wife, and "shadowbanning" her from various websites including ChatGPT, Twitter, and more. The worst allegation she has made is that she was sexually abused by Sam as a child (she has also claimed instances of sexual abuse from others). Her claims have evolved drastically over time. Newly for this lawsuit, they now include allegations of incidents where Sam was over 18. All of these claims are utterly untrue. This situation causes immense pain to our entire family. It is especially gut-wrenching when she refuses conventional treatment and lashes out at family members who are genuinely trying to help. We ask for understanding and compassion from everyone as we continue to support Annie in the best way we can. We sincerely hope she finds the stability and peace she's been searching for. -Connie, Sam, Max, and Jack”

Where did I say what anyone else did or didn't do, or judge anybody?

Apologies if I misread your post, but by calling Sam a psychopath in the same post, it seemed to imply that you would have offered her treatment that he did not.

I didn't say he was or was not, you likely read it that way because of some bias you have.

Dammit, why didn't they make it a trillion!? That would really send a message.

To be honest, ChatGPT really, really spiked Nvidia's stock (AI, duh), so yes, they did make it a trillion, except it's in some other company lol.

Personally, my estimate of a rational valuation would be:

$1-2T with no legal risk.

$300B assuming a rational and uncorrupt government, which should, at some point, kick them back to non-profit status and convict people for fraud.

Of course, too-big-to-fail means this won't happen.

I'd give my left nut to buy into OpenAI at this valuation. $300B is peanuts compared to where it would trade publicly, FCF and net income be damned. The growth and the optionality are there when you bring a tool this valuable to the world. This is destined to trade over $2T rapidly imo. PLTR (granted, imo a bubble) trades above that, and PLTR is basically a glorified IBM/Accenture business model with mediocre growth.

If you owned 100% of OpenAI, and you weren't allowed to sell it, how would you go about using it to generate $2Tn?