My knee-jerk reaction was that this looks like the kind of opportunistic maneuver Sam is known for, and I'm considering canceling my subscriptions and business with OpenAI.

But what's the most charitable / objective interpretation of this?

For example - https://x.com/UnderSecretaryF/status/2027594072811098230

Does it suggest that the determination of "lawful use" (and of Dario's concerns) falls upon the government, not the AI provider?

Other folks have claimed that Anthropic planned to bake the contentious red lines into Claude's constitution.

Update: I have cancelled my subscriptions until OpenAI clarifies the situation. From an alignment perspective Anthropic's stand seems like the correct long-term approach. And at least some AI researchers appear to agree.

I think Altman probably rationalised it to himself by thinking that if he doesn’t do it, Musk/xAI will, and they give zero fucks about safety. So maybe he told himself that it’s better if OpenAI does it.

Is there a name for this phenomenon? I've taken to calling it "the nihilist's excuse"

I call it race to the moral bottom.

Related: https://www.youtube.com/watch?v=KNqozQ8uaV8

I love it when I can guess what a video is before I click it. Exactly what I hoped it would be.

Similar to a false dichotomy: "If we don't do it, <other evil guy> will."

I think it’s called being a “shithead”

I feel that if xAI worked well for the job, it would already have been selected.

Knowing Sam, that's exactly what happened -- and the echo chamber inside OpenAI wouldn't dare to disagree.

yeah, or he didn't even

[flagged]

It doesn’t have to be genuine concern, it may just be his internal narrative.

He allegedly raped his own sister. No charges have been brought against him.

As people have repeatedly mentioned, if the War Department was unhappy with Anthropic's terms, they could have refused to sign the contract. But they didn't: they were fine with it for over a year. And if they changed their mind, they could've ended the contract and both sides could've walked away. Anthropic said that would've been fine. But that's not what happened either: they threatened Anthropic with both SCR designation and a DPA takeover if Anthropic didn't agree to unilateral renegotiation of terms that the War Department had already agreed were fine.

It's absurd, and doubly so if OAI's deal includes the same or even similar redlines to what Anthropic had.

it seems like the oai deal does include the same red lines, plus some more, and the ability for oai to deploy safety systems that limit the model's use cases via technical means

this seems strictly better than what anthropic had. anthropic has ruined its relationship with the US govt, giving oai a strong negotiating hand

the oai folks are good at making deals, just look at all the complex funding arrangements they have

"OAI wins by playing the government's game" is such a catastrophically bad take.

> anthropic has ruined their relationship with the US govt, giving oai a good negotiating hand

You want to try defending this ridiculous statement a bit more thoroughly?

For a start, the designation by the government of a company as a supply chain risk is not a negotiating tool. It may well be found to be arbitrary and capricious once the courts look at it. Businesses have rights too.

For another, why do you think OAI was able to make what looks like the same deal? Anthropic was willing to say yes to anything lawful up to its red lines, and it was still a no. Why turn around and give OAI exactly the same thing, unless it's not really what it looks like?

And Altman is always looking for the next buck.

All these supposedly impressive complex funding arrangements have OAI on the hook to firms like Oracle in the hundreds of billions of dollars. No indication at all how this unprofitable business will become a trillion dollar juggernaut.

you're right, a supply chain risk designation is not a negotiating tool. it's spite after talks have ended. it indicates a ruined relationship

the oai deal is similar, but it includes technical safeguards. I think anthropic would have wanted the oai deal

the oai deal wasn't successful only because the govt is rebounding from anthropic. the military prefers boundaries to be technical, not contractual

they can try using it, and trust that it will only operate within its designed limits, where the output is reliable

technical barriers to misuse help prevent both accidental and bad-faith misuse. a contract allows both kinds, enforced only by lawsuits, and filing in court to dispute the terms is not always allowed

> supply chain risk is not a negotiating tool. it's spite after talks have ended.

No. It's unlawful abuse of power.

> the military prefers boundaries to be technical, not contractual

That's nice for the military. Meanwhile, Anthropic has the right to refuse the use of its IP without being subject to punishment by the government.

You seem to me to be irretrievably "deal-brained", and not at all concerned about the obvious abuse of power by the government here, or the constant display of bad faith by gov't officials.

I am comparing the oai and anthropic deals. most of your comment isn't on that topic

if you believe the government acts in bad faith and is untrustworthy, why trust them to not violate the terms of a contract?

technical safeguards are more secure. the oai deal seems better

Adding to this, IIRC the US govt threatened to invoke laws that have never been used against an American company in the entire history of the US, over two conditions:

1. No global surveillance on citizens

2. No autonomous killing machines (essentially)

That was it. Anthropic was fine with everything else, but they couldn't (in good conscience?) agree to these two things, and just these two very reasonable demands caused the govt to spiral so badly.

Unless you're using an enterprise plan or pay per token, you're not hurting their business at all by cancelling. The consumer plans are heavily subsidised.

Cancelling is the only language these companies understand.

Even Disney couldn't ignore the mass cancellations after dropping Kimmel, and Disney+ barely turns a profit.

I think their consumer plans are gross-margin positive, but OpenAI has ~50M paying subscribers driving >$10B in revenue.

Realistically, you need at least ~1M subscribers to cancel to make this painful.

But I suspect this will get drowned out in the face of other news.

This is ultimately about drawing moral lines, isn't it? In that case it wouldn't matter if it hurts their business or not.

It will hurt in future funding rounds if their subscriber metric stalls or goes backwards, regardless of how many of those subscriptions are profitable.

Does it matter? These AI companies need to be able to prove that users are willing to pay at all, even if they're not paying a profitable amount of money. If investors see that they're dumping money into something that's not selling, why continue to do so?

There is value tied to free users, but also, I'm not sure I want my work and data in a product that's OK with DoD mass surveillance, and I'm not sure my customers would want their data pumping through it either.

AI companies are growth companies: their whole point is that they tolerate extreme losses and lack of profitability so long as they grow a lot.

If you stop using ChatGPT, you throw a wrench in their growth numbers.

I would also consider that your training data could contain important info. And to be honest, with their circular financing (Nvidia <-> OpenAI, with GPUs being the main cost), and given that OpenAI isn't facing the RAM crisis (heck, it created the RAM crisis by pre-ordering 20%), plus their recent deals, money isn't an issue for them for some time now. Growth is.

You are also forgetting that OpenAI is planning to add ads, in which case you would be the product. It's better not to discourage anyone who wishes to cancel.

Other commenters have made some good points as well. I used to think the same thing as you, but I do think that cancelling might make the most sense.

That, or if you want to cause maximum damage, burn as many tokens as you physically can asking random things to waste OpenAI's money. But remember that the models still consume energy, so you'd be wasting energy on something quite pointless.

IMO, it might be better to just cancel and not use OpenAI.