"AI Safety" got suborned, then dropped when it wasn't needed anymore.
Every misalignment/AI safety paper is basically a metaphor for how corporate values can misalign with actual human values under capitalism.
The first thing that happened when "AI Safety" became useful to corporate interests was that its "goal" instantly became "profitability," not safety. "AI Safety" turned into liability minimization, not actual safety for humanity. (Look! The system is now misaligned with its stated goal; wonder how that happened?)
The AI Safety concerns were proven true almost immediately, and now we live in a world where it is too late to prevent the superintelligences we call "corporations" from paper-clipping us to death in pursuit of profit.