It’s amusing how “ads” is seen as an obvious way for OAI to make a profit, as if Google’s (especially) and Meta’s ads businesses aren’t some of the most sophisticated machines on the planet.
Three generations of Twitter leadership couldn’t make ads on that platform profitable, and that platform exposes far more useful user-specific information than ChatGPT does.
The hubris is incredible.
There's an absolutely massive disconnect between the technology Sam Altman is presenting in interviews and what is actually available. Like, they're going to create an AI that will design fusion power plants, but right now they can't turn a profit on a technology that millions of people actually use in their day-to-day work? Can you sell enough ads to carry you through to the fusion-capable AI?
More and more, OpenAI is drawing parallels to the Danish IT Factory scandal. Self-proclaimed world-leading innovation and technology in the front, financial sorcery in the back.
If they really believe their AI is going to be so great, I guess they can just ask it for a business model when it gets there. So their lack of business model is at least self-consistent.
Right now they may be a bit scarce on business plans and revenue, but I hear they're ushering in an era of "post-scarcity" so that should fix it.
I never saw it, but I heard that was the actual original pitch deck.
I have basically no sympathy toward them, but that’s cool, that’s ballsy.
I should point out that making the pitch deck sound ballsy and cool was that person's goal. So it may have been a made-up story.
That is more or less their actual plan. They ignore, or want us to ignore, that the technology is commoditising so fast that even if it is great, they won't have enough of an advantage to give them an edge for more than a matter of months. Just as Microsoft and anyone else betting on AI data centre rollouts want us to ignore that the equipment they are rolling out will be functionally inadequate to support new models long before it earns enough to offset its cost. The only part of this capital expenditure that will provide lasting value is the building/power/cooling infrastructure, and probably not all of that.
It's a giant money pit, funding a bunch of people who are not long off the crypto grift train if they are at all.
The LLM space is so weird. On the one hand, they are spectacularly amazing tools I use daily to help write code, proofread various documents, understand my Home Assistant configuration, and occasionally reflect on parenting advice. On the other hand, they are the product of massive tech oligarchs, require $$$$ hardware, are dumber than a box of rocks at times, and all the stuff you said. Oh yeah, and it definitely has a whiff of crypto grift all over it, and yet unlike crypto it actually is useful and produces things of value.
Like, where is this tech headed? Is it always going to be something that can only be run economically off shared hardware in a data center, or is the day I can run a “near frontier model” on consumer-grade hardware just around the corner? Is it always going to be trained and refined by massive centralized powers, or will we someday soon be able to join a peer-to-peer training clan run by denizens of 4chan?
This stuff is so overhyped and yet so underhyped at the same time. I can't really wrap my head around it.
> the day I can run a “near frontier model” on consumer-grade hardware just around the corner?
I suspect it is, in fact. But you can also see why a bunch of very very large, overinvested companies would have incentives to try to make sure it isn't. So it's going to be interesting.
> funding a bunch of people who are not long off the crypto grift train if they are at all.
Your last statement: are you implying that the AI-bubble is perhaps an attempt at building out more cryptocurrency mining outfits?
No I just think it's the same people (because it is the same people). They jump from hype technology to hype technology, and many of them had an enormous incentive to jump from one GPU-investment-heavy technology with a bad reputation for grift to the new shiny-clean-hope-for-the-future thing that might help them make use of their capital investments.
But specifically at least one of these people, Sam Altman, is not, IMO, off the crypto grift train, because he's still chairman of Worldcoin, which strikes me (and more importantly strikes regulators around the world [0]) as a pretty shoddy operation (not to mention creepy and weird).
[0] https://en.wikipedia.org/wiki/World_(blockchain)#Legal_and_r...
"Could it be that, once again", Sam Altman is really not that far removed from a grifter?
> It’s amusing how “ads” is seen as an obvious way for OAI to make a profit, as if Google’s (especially) and Meta’s ads businesses aren’t some of the most sophisticated machines on the planet.
There is much more manipulation potential with LLMs than with typical ads. I am worried. It is getting more and more difficult to distinguish ads from neutral information.
I think ChatGPT's user info will be far more valuable than Twitter's or even Meta's.
Yes, all that user info such as “write me hot waifu erotica”…super valuable.
The court order from the Google search antitrust case gives OAI access to Google Ads for 5 years, if they choose.
Twitter executed incredibly, incredibly badly in the ads space. It came out that a majority of their business was brand advertising, which just blows my mind.
They should've made so much money on direct response and yet somehow they messed it all up.
Just like they should have been a few times as large in terms of users, but they executed really, really badly.
So I'm not sure Twitter's failures imply anything about OpenAI's prospects.
Twitter at least booked a profit, which is more than anyone can ever say about OpenAI.
Eventually, yes. But they should've been huge, making a substantial fraction (50%) of Meta's or Google's revenue. I could never understand what went wrong, tbh.
https://www.bloomberg.com/opinion/newsletters/2025-10-15/ope...
> There’s a famous Sam Altman interview from 2019 in which he explained OpenAI’s revenue model [1]:
>> The honest answer is we have no idea. We have never made any revenue. We have no current plans to make revenue. We have no idea how we may one day generate revenue. We have made a soft promise to investors that once we’ve built this sort of generally intelligent system, basically, we will ask it to figure out a way to generate an investment return for you. [audience laughter] It sounds like an episode of Silicon Valley, it really does, I get it. You can laugh, it’s all right. But it is what I actually believe is going to happen.
> It really is the greatest business plan in the history of capitalism: “We will create God and then ask it for money.” Perfect in its simplicity. As a connoisseur of financial shenanigans, I of course have my own hopes for what the artificial superintelligence will come up with. “I know what every stock price will be tomorrow, so let’s get to day-trading,” would be a good one. “I can tell people what stocks to buy, so let’s get to pump-and-dumping.” “I can destroy any company, so let’s get to short selling.” “I know what every corporate executive is thinking about, so let’s get to insider trading.” That sort of thing. As a matter of science fiction it seems pretty trivial for an omniscient superintelligence to find cool ways to make money. “Charge retail customers $20 per month to access the superintelligence,” what, no, obviously that’s not the answer.
> On a pure science-fiction suspension-of-disbelief basis, this business plan is perfect and should not need any updating until they finish building the superintelligent AI. Paying one billion dollars for a 0.2% stake in whatever God comes up with is a good trade. But in the six years since announcing this perfect business plan, Sam Altman has learned [2] that it will cost at least a few trillion dollars to build the super-AI, and it turns out that the supply of science-fiction-suspension-of-disbelief capital is really quite large but not trillions of dollars.
> [1] At about 31:49 in the video. A bit later he approvingly cites the South Park “underpants gnome” meme.
> [2] Perhaps a better word is “decided.” I wrote the other day about Altman’s above-consensus capital spending plans: “‘The deals have surprised some competitors who have far more modest projections of their computing costs,’ because he is better at this than they are. If you go around saying ‘I am going to build transformative AI efficiently,’ how transformative can it be? If you go around saying ‘I am going to need 1,000 new nuclear plants to build my product,’ everyone knows that it will be a big deal.”