Dude, you have no idea what you're talking about. Just stop talking and read for a few minutes.

OpenAI was founded around the principle of avoiding catastrophic risk. No one at OpenAI is confused about that goal. It is (nominally) the governance board's sole mandate. Sidelining that team was an intentional move by a CEO who wants to pivot into a for-profit company.

I just cannot explain how wrong you are on this. Please have some intellectual humility.

Catastrophic risk was never a serious possibility from anything OpenAI could ever produce, so it's not surprising at all that this team is being sidelined as the company pivots to a for-profit model.

Anyone who joined OpenAI thinking its research into language models was seriously going to produce "catastrophic risk" is a joke.

If you want me to believe "catastrophic risk" is a plausible outcome from your company, I would assume you were developing autonomous weapons, not glorified chatbots.

Please be serious