X-risk talk heightens fear in everyone, but the reasons changes like this get made at large technology companies are usually banal. Two alternative explanations: the person just felt like coding again, or the projections of exponential progress are falling apart on short timelines. You don't need a bunch of safety people if you're seeing that the LLM feature curve is actually sigmoid, so you pivot to products and applications of the existing models, which will continue to get better in specialized ways.

> if you're seeing that the LLM feature curve is actually sigmoid

It takes a few months to train an advanced model; let's say 4 months. So in the 3 years since these models became a thing, there have been only about 9 strictly sequential training runs. There is no way that, in a technology as advanced as LLMs, one can be sure at depth 9 that performance has hit a plateau. Surely there are many more ideas to be discovered and tested.
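Just to make that arithmetic explicit (a back-of-the-envelope sketch; the ~4-month run length and ~3-year window are the assumptions above, not measured figures):

    # Rough count of strictly sequential frontier training generations,
    # assuming each run takes ~4 months and the field is ~3 years old.
    months_elapsed = 3 * 12   # ~3 years since large LLMs "became a thing"
    months_per_run = 4        # assumed wall-clock time for one frontier run

    sequential_depth = months_elapsed // months_per_run
    print(sequential_depth)   # -> 9: not many chances to learn you've plateaued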

But we can be quite sure about the categories of error that remain possible with the technology, however advanced it gets. Because of that, there is a plateau in the range of useful applications, and overcoming it would need a paradigm shift. Diminishing returns are on the horizon.

If this is indeed the case, then OAI is lying, and Sam Altman in particular is extremely convincing, going so far as to write an off-putting blog post on the topic of achieving AGI. There is no AGI that does not carry safety risks, catastrophic or otherwise; that's exactly why OpenAI was founded in the first place, in fact: https://web.archive.org/web/20230714043611/https://openai.co...

Re: personal preference, I think the direction is crystal clear. For one thing, it's my understanding from the article that this guy's whole team was reorg'd into oblivion.

> OAI is lying and Sam Altman in particular is extremely convincing

Sam is an excellent hype man and is going to play to the strengths of the team and their accomplishments; every new product release is going to be hailed as a breakthrough until people become skeptical about whether it really is one. In the middle of the hype cycle you keep your foot on the gas, because you might make it through a potential AI winter, and if not, you can invest in more products.

"AGI" is a shaky definition with moving goalposts. What it means to me might not be what it means to you. How it manifests in product is unlikely to be the science-fiction "one model that does everything". It also doesn't mean that the path to AGI is the path to ASI, or the path to catastrophic risks.

I personally believe that if OpenAI has dismantled the safety org, it is not just because it is in their short-term best interest, but also because they have found that many of the initial concerns around "catastrophic risk" (in the MIRI-style doomer sense) from current systems are unlikely or invalid. As for the smaller safety risks, I'm not sure business has ever really cared about those unless the realized costs outweigh the profit.

The second someone mentions p(doom), their p(sex) zeroes out.

Maybe the guy realized he can get laid if he has a normal job instead of being Daisy Doomer on a payroll.