It is disconcerting how Altman has used "AI safety" as a marketing tool. The more people imagine the universe turned into paperclips, the more they invest. Obviously Altman doesn't care about safety (I don't either; I'm not an AI-doomer). But he truly does come across as someone incapable of telling the truth. Are you even a liar if honesty is not in the set of possible outcomes?

Still, there's something oddly reassuring here: if you believe "AI safety" is essentially a buzzword (as I do), then this whole affair comes down to people squabbling over money and power. There really is nothing new under the sun.

"This thing might destroy humanity - we need to build it ASAP" does not really make sense. But it has enthralled many smart researchers who would normally demand specific, testable claims and logical responses to those claims.

We have drastically escalated the claims necessary to motivate startup employees. It used to be that you could merely dangle an interesting problem in front of a researcher. Then you could earn millions, then billions. Then TAMs in the trillions. Now AGI will destroy humanity unless you, personally, step in. Elon is talking about Kardashev III civilizations. The universe cannot bear the hype being loaded upon it.

I agree with you completely, but the way I see it, Anthropic is 100x worse when it comes to amplifying this doomer BS for marketing. It's their whole shtick.