If this is indeed the case, then OAI is lying and Sam Altman in particular is extremely convincing, going so far as to write an off-putting blog post on the topic of achieving AGI. There is no AGI that does not have safety risks, catastrophic or otherwise; that's exactly why OpenAI was founded in the first place, in fact: https://web.archive.org/web/20230714043611/https://openai.co...
Re: personal preference, I think the direction is crystal clear. For one thing, it’s my understanding from the article that this guy’s whole team was reorg’d into oblivion.
> OAI is lying and Sam Altman in particular is extremely convincing
Sam is an excellent hype-man and is going to play to the strengths of the team and their accomplishments; every new product release is going to be hailed as a breakthrough until people become skeptical about whether it really is one. In the middle of the hype cycle you keep your foot on the gas, because that momentum is what gets you through a potential AI winter, and if the winter never comes, you invest it in more products.
"AGI" is a shaky definition with moving goalposts. What it means to me might not be what it means to you. How it manifests in product is unlikely to be the science-fiction "one model that does everything". It also doesn't mean that the path to AGI is the path to ASI, or the path to catastrophic risks.
I personally believe that if OpenAI has dismantled the safety org, it is not just because doing so is in their short-term best interest, but also because they have found that many of the initial concerns about "catastrophic risk" (in the MIRI-style doomer sense) from current systems are unlikely or invalid. As for the smaller safety risks, I'm not sure business has ever really cared about those unless the realized costs outweigh the profits.