For me, the attempted productization of Sora was conclusive proof that 1) OAI was overcapitalized and desperate for revenue, 2) safety didn't matter much to them, and 3) improving the world didn't matter much either.

At one point you mentioned an interaction with OpenAI staff where you were looking to interview AI Safety researchers. You were rebuffed b/c "existential safety isn't a thing". Does this mean that you could find no evidence of an AI Safety team at OAI after Jan Leike left? If you look at job postings it does seem like they have significant safety staff...

Interestingly, we are still riding the technological momentum inspired and created by what OpenAI used to be: AI for humanity.

Given that the initiative started circa 2017, much of the good remains. What began as a gathering of creative geniuses has been hijacked, and it's now turning into cash-cow tech.
