> They’ve never turned a profit
Now that OpenAI is starting to talk about ads and allowing "erotic" content, I feel more comfortable in my prediction that not only has OpenAI never turned a profit, it never will. It will be consumed by Microsoft or crash the market so hard it's not even funny. The technology will survive, and it will be useful, but OpenAI as a company is done.
It’s amusing how “ads” is seen as an obvious way to make profit for OAI as if Google’s (especially) and Meta’s ads businesses aren’t some of the most sophisticated machines on the planet.
Three generations of Twitter leadership couldn't make ads on that platform profitable, and that platform exposes far more useful user-specific information than ChatGPT does.
The hubris is incredible.
There's an absolutely massive disconnect between the technology Sam Altman is presenting in interviews and what is actually available. Like, they're going to create an AI that will design fusion power plants, but right now they can't turn a profit on a technology that millions of people actually use in their day-to-day work? Can you sell enough ads to carry you through to the fusion-capable AI?
More and more OpenAI is drawing parallels to the Danish scandal of IT Factory. Self-proclaimed world leading innovation and technology in the front, financial sorcery in the back.
If they really believe their AI is going to be so great, I guess they can just ask it for a business model when it gets there. So their lack of business model is at least self-consistent.
Right now they may be a bit scarce on business plans and revenue, but I hear they're ushering in an era of "post-scarcity" so that should fix it.
I never saw it, but I heard that was the actual original pitch deck.
I have basically no sympathy toward them, but that’s cool, that’s ballsy.
I should point out that making the pitch deck sound ballsy and cool was that person's goal, so it may have been a made-up story.
That is more or less their actual plan. They ignore or want us to ignore that the technology is commoditising so fast that even if it is great, they won't have enough of an advantage for this to provide an edge for more than a matter of months. Just as Microsoft and anyone betting on AI data centre rollouts want us to ignore that the equipment they are rolling out will be functionally inadequate to support new models in far less time than they can make money to offset the cost; the only part of this capital expenditure that will provide lasting value is the building/power/cooling infrastructure, and probably not all of that.
It's a giant money pit, funding a bunch of people who are not long off the crypto grift train if they are at all.
The LLM space is so weird. On the one hand they are spectacularly amazing tools I use daily to help write code, proofread various documents, understand my Home Assistant configuration, and occasionally reflect on parenting advice. On the other hand, they are the product of massive tech oligarchs, require $$$$ hardware, are dumber than a box of rocks at times, and all the stuff you said. Oh yeah, and it definitely has a whiff of crypto grift all over it, yet unlike crypto it actually is useful and produces things of value.
Like, where is this tech headed? Is it always going to be something that can only be run economically off shared hardware in a data center, or is the day I can run a “near frontier model” on consumer-grade hardware just around the corner? Is it always going to be trained and refined by massive centralized powers, or will we someday soon be able to join a peer-to-peer training clan run by denizens of 4chan?
This stuff is so overhyped and yet so underhyped at the same time. I can’t really wrap my head around it.
> the day I can run a “near frontier model” on consumer grade hardware just around the corner?
I suspect it is, in fact. But you can also see why a bunch of very very large, overinvested companies would have incentives to try to make sure it isn't. So it's going to be interesting.
[dead]
> funding a bunch of people who are not long off the crypto grift train if they are at all.
Your last statement: are you implying that the AI-bubble is perhaps an attempt at building out more cryptocurrency mining outfits?
No I just think it's the same people (because it is the same people). They jump from hype technology to hype technology, and many of them had an enormous incentive to jump from one GPU-investment-heavy technology with a bad reputation for grift to the new shiny-clean-hope-for-the-future thing that might help them make use of their capital investments.
But specifically at least one of these people — Sam Altman — is not, IMO, off the crypto grift train, because he's still chairman of Worldcoin, which strikes me (and more importantly strikes regulators around the world [0]) as a pretty shoddy operation (not to mention creepy and weird).
[0] https://en.wikipedia.org/wiki/World_(blockchain)#Legal_and_r...
Could it be that, once again, Sam Altman is really not that far removed from a grifter?
> It’s amusing how “ads” is seen as an obvious way to make profit for OAI as if Google’s (especially) and Meta’s ads businesses aren’t some of the most sophisticated machines on the planet.
There is much more manipulation potential with LLMs than with typical ads. I am worried. It is getting more and more difficult to distinguish ads from neutral information.
I think ChatGPT's user info will be far more valuable than Twitter's or even Meta's.
Yes, all that user info such as “write me hot waifu erotica”…super valuable.
The court order from the Google search antitrust case gives OAI access to Google Ads for 5 years, if they choose.
Twitter executed incredibly, incredibly badly in the ads space. It came out that a majority of their business was brand advertising which just blows my mind.
They should've made so much money on direct response and yet somehow they messed it all up.
Just like they should have been a few times as large in terms of users, but they executed really, really badly.
So I'm not sure Twitter's failures imply anything about OpenAI's prospects.
Twitter at least booked profit which is more than anyone can ever say about OpenAI.
Eventually, yes. But they should've been huge, making a substantial fraction (50%) of Meta's or Google's revenue. I could never understand what went wrong, tbh.
https://www.bloomberg.com/opinion/newsletters/2025-10-15/ope...
> There’s a famous Sam Altman interview from 2019 in which he explained OpenAI’s revenue model [1] :
>> The honest answer is we have no idea. We have never made any revenue. We have no current plans to make revenue. We have no idea how we may one day generate revenue. We have made a soft promise to investors that once we’ve built this sort of generally intelligent system, basically, we will ask it to figure out a way to generate an investment return for you. [audience laughter] It sounds like an episode of Silicon Valley, it really does, I get it. You can laugh, it’s all right. But it is what I actually believe is going to happen.
> It really is the greatest business plan in the history of capitalism: “We will create God and then ask it for money.” Perfect in its simplicity. As a connoisseur of financial shenanigans, I of course have my own hopes for what the artificial superintelligence will come up with. “I know what every stock price will be tomorrow, so let’s get to day-trading,” would be a good one. “I can tell people what stocks to buy, so let’s get to pump-and-dumping.” “I can destroy any company, so let’s get to short selling.” “I know what every corporate executive is thinking about, so let’s get to insider trading.” That sort of thing. As a matter of science fiction it seems pretty trivial for an omniscient superintelligence to find cool ways to make money. “Charge retail customers $20 per month to access the superintelligence,” what, no, obviously that’s not the answer.
> On a pure science-fiction suspension-of-disbelief basis, this business plan is perfect and should not need any updating until they finish building the superintelligent AI. Paying one billion dollars for a 0.2% stake in whatever God comes up with is a good trade. But in the six years since announcing this perfect business plan, Sam Altman has learned [2] that it will cost at least a few trillion dollars to build the super-AI, and it turns out that the supply of science-fiction-suspension-of-disbelief capital is really quite large but not trillions of dollars.
> [1] At about 31:49 in the video. A bit later he approvingly cites the South Park “underpants gnome” meme.
> [2] Perhaps a better word is “decided.” I wrote the other day about Altman’s above-consensus capital spending plans: “'The deals have surprised some competitors who have far more modest projections of their computing costs,’ because he is better at this than they are. If you go around saying ‘I am going to build transformative AI efficiently,’ how transformative can it be? If you go around saying ‘I am going to need 1,000 new nuclear plants to build my product,’ everyone knows that it will be a big deal.”
The switch from "Ai might kill us" to "you'll goon to Ai" was kinda funny, not gonna lie
This may be the funniest comment I've ever read, considering the circumstances.
Just based on the number of ads I get for thinly veiled erotic chatbots, and the success sites like character.ai have with pretty bad LLMs, there has to be a lot of money in erotic LLM content. OpenAI turning to that market is a sign they are running out of easy investor money, but if they can survive the associated controversy without lobotomizing the models, this sounds like a method to turn the entire company profitable overnight. They might have to raise prices or abandon the flat-rate model to deal with heavy users, but locking adult content behind separate plans might even increase acceptance.
Not sure if increased availability of LLM porn or the gradual erosion of LLMs with ads and sponsored content would be the greater evil on a societal level. Neither is particularly great. But they will certainly drive shareholder value
This. If you really think you're a couple of years away from building Digital God, and today have virtually unlimited access to capital, you are not going to spend time shipping a sexy mode.
Why not do both if you have unlimited capital today?
Do they also have infinite labor?
Isn’t the whole point of AI to replace labor? If they really want to put their money where their mouth is, they’d be casting off staff at a prodigious rate.
Because you don't have unlimited attention
But people have attention now, and it's surely quicker to make sexy mode than God.
Porn is enormously profitable. This might just be the saving grace for AI. Historically, porn has been a pioneer in new tech industries (home video, online commerce, video and streaming). This time they aren't first to the game but don't underestimate the industry.
It's not as profitable as the original promise of AI to cut labour costs on a massive scale. Stockholders won't be happy.
The saving grace here might be that you can hide your porn subscription in your monthly OpenAI subscription. Only question then: Will VISA and Mastercard cut off OpenAI for peddling porn?
A quick search seems to indicate that the porn industry has around $100B in revenue per year, 20% of which is from subscriptions. If OpenAI consumed the entire global market for subscriptions, $20B, would that cover their yearly operational cost?
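The arithmetic in that comment can be sketched as a back-of-envelope calculation. All figures here are the comment's own rough estimates plus a hypothetical operating-cost number (OpenAI's actual costs are not public, so `assumed_openai_opex` is purely an assumption for illustration):

```python
# Back-of-envelope check of the comment's figures. These are the comment's
# rough estimates, not audited data.
porn_market_revenue = 100e9   # claimed global annual revenue (~$100B)
subscription_share = 0.20     # claimed fraction coming from subscriptions

subscription_pool = porn_market_revenue * subscription_share

# Hypothetical assumption: OpenAI's yearly operating cost. The real number
# is not public; this is just a placeholder to make the comparison concrete.
assumed_openai_opex = 10e9

print(f"Total subscription pool: ${subscription_pool / 1e9:.0f}B")
print(f"Covers assumed opex of ${assumed_openai_opex / 1e9:.0f}B: "
      f"{subscription_pool >= assumed_openai_opex}")
```

Even under the wildly generous assumption of capturing 100% of the world's porn-subscription spending, the pool is on the order of $20B/yr, so whether it "covers" costs depends entirely on which cost estimate you believe.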
Assuming it deals somehow with the folks who use it for revenge porn, deep fakes of non-consenting parties, and depictions of minors…has it said anything about doing that? Has anyone? Does anyone know the legal implications of generating mass quantities of sexual exploitation, or is this just another thing with AI that society will have to simply tolerate in the name of “progress”?
Do you predict the same for Anthropic? Hopefully they will stick around.
The problem, I think, is that IF OpenAI fail, they'll take with them a lot of other AI companies, simply because funding will be redirected away from the field entirely. If you're profitable, then you're probably going to be fine. If anything your operating costs will go down as there is less competition for staff and compute.
If we go purely by economics, then Anthropic belongs to the same category of LLM corporations — ones whose only product is an LLM, as opposed to the likes of Google, Microsoft, even Facebook. Sure, these LLM-first corporations have a very tiny lead in both technology and (LLM) brand recognition, but it is shrinking fast. I suspect that only companies which bundle LLMs with other big products (and do it cheaply) will survive in the long run.
> OpenAI is starting to talk about ads and allowing "erotic" content
I'm curious what you're referring to here. Did Sam Altman tweet something about this?