Less than two years ago, Sam Altman said

> I kind of think of ads as a last resort for us for a business model. I would do it if it meant that was the only way to get everybody in the world access to great services, but if we can find something that doesn't do that, I'd prefer that.

So, is this OpenAI announcing they're strapped for cash?

No, I suspect that "I kind of think of ads as a last resort" was doublespeak for "ads are coming eventually".

I would tend to think of someone like him as a person who uses words to achieve a specific goal, rather than someone who speaks whatever is truly on their mind. Whether those words are lies or truth or somewhere in between is irrelevant; what matters to them is the outcome.

It's likely a waste of time trying to unpick the meaning, because there is none. "But Sam Altman said..." to me has about as much value as "ChatGPT told me...".

This is something I’ve long believed to be true and important to understand, yet rarely see anybody else argue, so it makes me happy to read. I think of it like the kissing noise we make to make a pet come. You could call it the truth or a lie depending on what the pet is expecting and whether you then do it, but both judgements miss what actually happened: it didn’t even occur to us to think about whether it’s “true”, we just made that noise because we expected it to produce the desired behavior. CEOs and politicians are usually like this with humans.

The kissing noise analogy is spot on! Made me smile

There is a thin layer of high functioning sociopath at the top of all human social structures. Never trust anyone who wants to lead at that level. You have more in common with a colossal squid at the bottom of the deepest trench than you do with that kind of human.

I think doublespeak is more along the lines of calling ads a "product recommendation strategy". This was either a) a plain lie b) they're actually at their last resort.

> This was either a) a plain lie b) they're actually at their last resort.

That's thinking like a normal honest human :-) My point is that it was likely not a statement about reality (true or false) at all, but rather a phrase designed to elicit some response in the listener, such as the idea: 'Sam Altman isn't the kind of CEO who would put ads in his products unless he really had to'.

He's not describing how things are, but how he wants you to think about them.

> He's not describing how things are, but how he wants you to think about them.

That is what a lie is. The fact that some people think he exists in a different plane of existence from normal humans does not change the meaning of “lie”.

I mean, I get that you are trying to make a subtle point but this:

> He's not describing how things are, but how he wants you to think about them.

is just a fancy way to describe lies. I'm not even sure if it specifies some interesting subset of lies, I think it's just the plain definition.

Oh I think there's a big difference. One is clever, manipulative, meant to control or coerce, possibly to facilitate long term strategic goals. The other could be a simple immediate denial of fact to avoid blame. I think the personality and capabilities of the person in the former case is more concerning.

I don't want to split hairs but I posit there is a difference because 'how I want you to think about things' could be a mixture of lies, truths, and half-truths.

'Lying', to me, implies some relationship with reality - I'm lying if I know there's no orange in my bag but I tell you that there is. What we're talking about is someone who might not know or care whether the orange or even the bag exists at all, and is just saying things to get some specific response out of the audience. The deception or not is irrelevant really.

I don't think you're making a useful point about the situation.

In the case of the orange in the bag, both Altman and his interlocutor can see the bag and the truth can be exposed by rummaging.

In the case of ads in the oAI chat feed, at the time Altman made the comment he was probably planning to put ads in the feed. But there might not even be emails about this, just conversation. And the engineers might not solve the "how" for a while... so there's nothing to rummage for.

However, in both cases Altman wants you to think something other than what's on his mind. There's an orange in his bag, but he wants you to think there is not. There are going to be ads because he owes the investors a tonne of money, but he wants you to think it won't happen, or won't happen soon, or will be "nice" ads...

The distinction is in the nature of the underlying truth, not in Altman's words or actions in the moment. In the moment, in both cases, he's lying.

Feels like the harm of the "last resort" lie outweighs whatever benefit he got from seeming honest.

I agree with your point. Mine was about the word doublespeak, which I don't think this is - it's a lie in effect, but it's something like what you describe, for which I don't know a term. A bunch of sentences said in complete disregard for truth or untruth; they're just supposed to get you to believe something.

This also kinda fits the profile of Altman that I'm getting from what I have seen - admittedly without looking in depth. A person who on the surface is a pathological liar, but who on closer inspection just says things. They just _happen_ to be complete lies, because that's what you need to say to achieve the goal in that set of circumstances. It's just that, because it's as morally objectionable as outright lying, some people would pause and think before doing it, while he seems to have no qualms at all.

Ah, got it. Maybe 'gaslighting' cuts more to the point?

The word I have heard is "bullshitting". Lies at least orient themselves with regard to the truth; bullshit floats free.

I think gaslighting is more sinister and deliberate, but it's on a similar spectrum of manipulative behavior. Perhaps, since his statements are less filled with the style of Musk's bravado on the topic of FSD, and they feel overall mid, I can propose MID: Manipulative-Impulsive Disorder?

That's how I shall think of it from now on ^^

Exactly this. Words are cheap these days; people will say all sorts of things to further their goals. The days when leaders stood by their words as a sort of moral testament to their character are gone, probably for good.

As we see many people will do or say just about anything to get more money, prestige or power.

For now but not for good. Neglecting moral character works as a shortcut for maybe a generation or two. But that path leads to destruction and decay eventually. It can't last.

Thank you. Agreed. There are some practical limits to that path. It works in the current ecosystem partially because the resulting degradation is slow, but it is built upon societal trust. Once that is gone, it will be rather painful to restore. A new New Deal will be needed, so to speak (the political evocation is accidental, but it is too late for me to coherently rewrite).

Hard men create good times. Good times create soft men. Soft men create hard times.

So what is the best system to get people to be invested in the general welfare of all people? What are we supposed to do?

Your question seems to imply that people have to be corralled towards a specific action, which to me comes across as rather cynical.

Why is it not possible to lay out your arguments honestly and let people decide on the merits?

I think, part of the issue is that, as a mass of humans, we tend to be rather dumb. And they certainly don't decide on merits, in aggregate. It is somewhat questionable if they decide on merits even as individuals ( unless we expand the definition somewhat ). But it is possible I got too cynical.

Some problems don't have solutions.

This one does though. These issues are solely created by humans, so of course humans can solve them, that's not even a question. People who care need to keep speaking up and reaching out to each other, get together; and by doing so expose the people who don't care, or actively are against the general welfare of humans, like rocks on the beach when the tide recedes.

It takes so much work, so much criminal energy, so much money and campaigns, to divide people. Whereas the opposite, people getting to know each other and working together, happens "by itself" all the time, for the most banal of reasons. Just give them some time and space together; no lobbying required, no bribes or blackmail, no psy-ops; just our innate desire to live and let live.

Humans who prey on humans are sick, it's as simple as that. Humans who don't want to stand up to humans who prey on humans may not be sick, but they're not our best, that's for sure, and they must not be our gatekeepers or our compass.

People getting to know each other and working together to genocide another group of people that's slightly different from them does indeed have many precedents in history.

The problem with your idea is that you see "humans" as some kind of abstract unified whole. People care about their peers far more than they do about "humans" in the abstract. When you're a powerful venture capitalist, these peers are other venture capitalists for example. Some call this "class consciousness".

> No, I suspect that "I kind of think of ads as a last resort" was doublespeak for "ads are coming eventually".

I don't think so. Resorting to ads is an obvious step but one that profoundly degrades the credibility of the whole service. It's a pyrrhic monetization strategy, one that's pulled only when all other options have failed. It's akin to scraping the bottom of the barrel to extract the remaining bits of value.

The reason the statement was "I kind of think of ads as a last resort" is clearly that ads were a last-resort move. And here they are.

> "But Sam Altman said..." to me has about as much value as "ChatGPT told me...".

Or Trump. Same profile.

There is something to be admired in this kind of person. They are not bound by their own words. It simply doesn't matter to them what they said a month ago, or a minute ago.

Their words are attached to the instant they are pronounced; they don't concern the future, or the past. They die immediately after they have been said. It's amazing to watch.

For certain values of 'admired'... It is impressive, in a diabolical way, and seems to be very effective.

It's might-makes-right... as an individual... as a boolean bully...

Feels to me like idealism crossing into realism. OpenAI could be the next Google, or the next Facebook, or the next… I don't know, Netflix?

All those companies (and many other large tech companies) have discovered the same arbitrage that older media companies discovered decades ago, which is that we, on average, are much more willing to pay with attention than with money, even where money would have been the better choice.

Advertising continues to be one of the most powerful business models ever invented, and I don't think that's changing any time soon.

Altman is an idealist?

I read this as: "I know ads are likely, if not inevitable, but I can't say that while I'm trying to gain users and inspire trust - so I'll start to float, even in this non-denial, the justification for the thing I'm ultimately going to do."

Altman wanting to look idealistic and inspiring.

See it as a brand image advertising campaign of the time.

The ideal is "It would be ideal if everyone on the planet voluntarily paid me $20/month"

Most billionaires are idealists when it comes to this one particular ideal.

The opposite of an idealist is a materialist. The opposite of an ideologue is a pragmatist.

In this sense I think Altman is an idealist, he concerns himself primarily with ideas, not so much with material reality.

So realistically, no AGI.

By all accounts, we're 2 years away from AGI, every year.

It's like fusion power, except there we halve the funding every year instead of doubling it.

Fusion power is proven to be possible.

AGI is not.

There is (eventually) no more profit to be made on energy when energy becomes virtually limitless.

There is (still) a lot of profit to be made on half-baked semi-AGI prospects.

It's not like the machines will ever be free, just the fuel. And it's not like the price of energy will go to zero, just be cheaper. To drive down the price of energy you first need to be taking a large slice of a trillion dollar pie.

If fuel or any other form of energy becomes virtually limitless and free, any form of matter will eventually also be kinda limitless and free. Could take longer than humanity will ever last though.

In the 'short' and current term there is still lots of money to be made in fuel indeed, but advancements in fossil free energy could make a real shift.

I think your characterisation of this as a discovery is a little naive. What you are describing is part of enshittification, and it happens too often to be an accident. Revenue maximisation is always the end goal. Also, it's not that the user is willing to pay with attention; there is no alternative. In fact it's the very opposite: more than once now, a product has basically been pitched as "pay us to avoid ads" and then, once it dominated the market, introduced ads. That's users trying to choose to pay with money over attention and ultimately being unable to do so.

The uncomfortable part is that "ads as a last resort" sounds very different once the product becomes one of the main places people ask for advice.

Well - I think the writing was on the wall when they announced they were going to be for-profit. Slippery slope and all that, but I’m sure some of this is because they’ve been giving out free tokens for years.

Even as a not-for-profit they would need cash flow.

The ads are for the free tier and new $8 ad-supported plan.

The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to a neutral on the balance sheet.

The key part of that quote was "everybody in the world". The ads are their way of sustaining the low end of access.

The revenue from highly targeted ads, using even better profiles than Google Search or even Facebook could build, may be non-negligible.

Commercial ads could be a smaller revenue source than political ads.

Political ads would destroy the value proposition. That would be an incredibly short-sighted move.

Chats with LLMs are often intensely personal, you don't want to create the perception that politicians have any level of access to it.

"That would be an incredibly short-sighted move."

Yes, but that has not stopped several companies from implementing stuff like this to get more money.

> The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible

So why chase this negligible revenue?

>The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans.

Unless they botch the implementation, it's not going to be negligible with ~800M+ free subscribers.

The real question is what do you get out of advertising to people who don't have any money? Kinda squeezing blood from a stone.

You'd be better off saying you use those people to A/B test changes and filling idle GPU batches while giving paying customers a more consistent experience.

There are lots of people who are willing to spend a lot of money on "real things" while not spending anything on bytes. It's the tech companies that have created this expectation of free services. Many non-tech people I know are relatively wealthy and think like this.

> The real question is what do you get out of advertising to people who don't have any money?

Psychographic data. What they learn from these folks will create the most powerful manipulation technology yet.

A bunch of people pay to remove ads, and a bunch of people are happy to give businesses their attention (view ads) in exchange for services - i.e. Gmail, YouTube - but don't feel they use them enough / are annoyed enough to warrant $15-25/month.

Some brands are okay with impressions... you can build trust in your product by advertising it for weeks/months, and when the user does make a purchase, that brand is on their mind.

That's how it begins.

> The ads are for the free tier and new $8 ad-supported plan.

Dang.

> The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to a neutral on the balance sheet.

Yeah, I guess this time around Sam Altman can't be lying about how many Monthly Active Users he has.

That's not how I read that sentence at all. Maybe I've just been speaking VC for too long.

What he meant was: "I'm going to get everybody in the world access to great services. Doing so means monetizing somehow. Ads will be the last way I choose to do that, but I will if it's the only way I can figure out how to achieve that goal."

You've said the same thing.

> Ads will be the last way I choose to do that

The implication is that they've exhausted all other options.

I haven't said the same thing as the parent commenter:

> So, is this OpenAI announcing they're strapped for cash?

It by no means conveys that. It means they haven't figured out another way to monetize something they want to do; it indicates nothing about their financial situation. It means they don't want to sell something at a loss perpetually while they figure it out.

Being forced into something you don't want to do, to stop selling at a loss... I would categorize that as some level of strapped for cash.

You realize we're talking about a product that is currently free, right? Neither of us have any insight into the margins of their paid offering.

All this means is: we have a free offering that we can't figure out another way to monetize right now.

We can each draw our own conclusions about what that might mean for the state of their business, but all of the other inferences (ha) in this thread are conjecture.

> You realize we're talking about a product that is currently free, right? Neither of us have any insight into the margins of their paid offering.

I don't see how that changes the analysis.

> All this means is: we have a free offering that we can't figure out another way to monetize right now.

And they're doing something they significantly don't want to do to monetize it.

Either they fully changed their mind, or the money is somewhat important, or they're utterly crazy.

The first is unlikely, the last is unlikely, the middle one is enough for a casual "strapped for cash".

It's a very minor conjecture. Actions aren't taken for no reason.

If we can agree that "strapped for cash" also includes "not stupid with cash", I think we're on the same page here. :)

(For all I know they are strapped for cash, to be clear; I just don't think the quote says that.)

Going with a last resort implies more than "not stupid".

Okay, fine: "conservative with cash" or even "tight with spending"?

(I'm not sure how much deeper HN threads can nest.)

"Tight" gets pretty close to "strapped", especially when it comes to making a change.

(They can go super deep if people are committed.)

I concede.

(Haha, ok, let's call a truce here before we break HN! Appreciate the conversation.)

Presumably the way to monetize a free tier is by converting them into paying users.

“Upgrade for an Ad free experience” will certainly be a part of it.

What other options are there?

I also remember him saying that, I guess on the Lex Fridman podcast. In my opinion, they will only try this on a handful of users and see whether it works out, just like Anthropic removed Claude Code from the Pro plan for a very small percentage of users, purely for testing purposes. It will all boil down to how people respond to the ads rollout.

For somebody so smart, surrounded by people so brilliant, in the very heart of Silicon Valley, not learning from the one startup that became one of the largest corporations ever - namely Google - is a pretty dumb move.

Context: Brin/Page said the same - they didn't like or want ads either, except as a last resort. Well, guess which world we all live in now.

BREAKING: Man changes mind.

He did not. He was/is a liar.

Who can resist the temptation of profit? One always has to make money

If I say “Doing X is a last resort” and then I’m caught doing X, it should raise some eyebrows about my level of desperation.

It's not that OpenAI is trying to raise revenues that bothers me; it's that they're doing things they themselves called desperate just a couple of years ago.

> Desperation

You’re right on the core of the issue. I think there has been some temporal stripping of context: that ‘last resort’ needs to be considered against their alternatives.

OpenAI isn’t a business scaling a popular website to profitability, that’s Reddit or Slashdot. OpenAI was promising revolutionary product technology that was breathlessly close to AGI and would eliminate positions and automate coding and, and, and…

Having your next-gen AGI do-it-all platform mature into hoping to recreate the business model of Reddit should raise eyebrows, and let everyone know the state of the Emperor's wardrobe.

They could be building an Office killer and consumer oriented OS’s & ecosystem for near infinite money… they are running ads. Ads for porn and dick pills? Not yet, that’d be another last resort.

Tons of people can resist the temptation, but they aren't likely to be the sort of person who gets put in a role like Altman's.

Charitably, it seems that we have yet to find, as a species/society, anything more effectively profitable than ads. I cannot blame those who come to this conclusion so long as no more powerful and proven motivator yet exists. I hate it, but I understand.

I think ads are just overpriced and companies do not really get that return. But marketing people have no metrics to show that.

"last resort" doing some heavy lifting in that quote.

Oh no... sweet summer child. Whatever the revenue is, whatever profit there is, whatever cash buffer any corporation has, you can be sure of one thing: they need this to go up and to the right...

It has become almost a perfect science to optimize this behavior: this is why you end up, bit by bit, with enshittified products all around you, where the pain of using the product is kept just below the threshold of you actually bashing it against the wall.

ChatGPT is just one of them, like Google search, your TV serving ads or ...

Sam Altman is the guy who was fired for lying. Why believe what he claims?

Or, Sam did not speak the truth back then, and always had ads in his mind. I think that was the strategy from the get go.

more like "Sam Altman said"

I think you're missing that Sam Altman is very smart. If OpenAI really were on the verge of becoming massively profitable due to their next-gen AI, he would not want that information leaking. If Sam Altman acts differently in the world where profits are on the horizon, that information leaks prematurely. Thus, he has to act as if OpenAI is strapped for cash, whether or not it is.

The keyword is "Glomarization": https://www.lesswrong.com/w/consistent-glomarization

This reads similar to the Trump 4D-chess excuse. It seems unlikely that this is a ruse, and much more likely that OpenAI's market cap is supported by doing "all the things" to exploit the huge monthly active user base that OpenAI has accumulated.

I would just assume they were still spending VC money to lock in users if nothing had happened. I would not assume "AI is about to make money obsolete".