Catastrophic risk was never a serious possibility from anything OpenAI could ever produce, so it's not surprising at all that the safety work is being sidelined as they pivot to a for-profit company

Anyone who joined OpenAI believing their language-model research would seriously produce "catastrophic risk" was kidding themselves

If you want me to believe "catastrophic risk" is a plausible outcome of your company's work, I'd expect you to be developing autonomous weapons, not glorified chatbots

Please be serious