> OAI is lying and Sam Altman in particular is extremely convincing

Sam is an excellent hype man who plays to the strengths of the team and its accomplishments; every new product release will be hailed as a breakthrough until people grow skeptical about whether it really is one. In the middle of a hype cycle you keep your foot on the gas, because the momentum can carry you through a potential AI winter, and if it can't, at least it funded more products along the way.

"AGI" is a shaky definition with moving goalposts. What it means to me might not be what it means to you. How it manifests in product is unlikely to be the science-fiction "one model that does everything". It also doesn't mean that the path to AGI is the path to ASI, or the path to catastrophic risks.

I personally believe that if OpenAI has dismantled its safety org, it is not just because doing so is in their short-term interest, but also because they have found that many of the initial concerns about "catastrophic risk" (in the MIRI-style doomer sense) from current systems are improbable or invalid. As for the smaller safety risks, I'm not sure business has ever really cared about those unless the realized costs outweigh the profit.