My assumption is that AI "safety" testing amounts to checking that the model won't say or show anything politically incorrect (according to the values of those who worked on it) and will lecture you instead, or alternatively, that it does enforce a culture on you, like the Gemini drama from a few months back where the developers decided that everything needed to be depicted as black, gay, and female even when it wasn't actually that way in the real world.
Perhaps a quick question or two to see whether it'll tell you how to make something naughty.
After that, a quick check to see if it's even awake, and if not, ship it.
It really is quite pointless trying to enforce agendas. You know how it starts showing or typing something and then covers/blurs it out? That's the developers' guardrails kicking in, preventing you from seeing what it was originally going to give you.
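If you're wondering how output can appear and then get yanked away, it's consistent with a second moderation pass running behind the stream. Here's a minimal Python sketch of that idea; the blocklist and function names are made up for illustration, not any vendor's actual pipeline:

```python
# Toy post-hoc moderation: tokens stream to the user while a checker runs
# behind the stream, and a late "flagged" result retracts what was shown.
# The blocklist stands in for a real moderation model; all names are illustrative.

BLOCKLIST = {"naughty"}

def moderate(text: str) -> bool:
    """Return True if the accumulated text trips the filter."""
    return any(word in text.lower() for word in BLOCKLIST)

def stream_with_guardrail(tokens):
    shown = ""
    for token in tokens:
        shown += token
        print(token, end="", flush=True)              # user already sees this
        if moderate(shown):                           # checker lags the stream
            print("\r" + " " * len(shown), end="\r")  # cover what was shown
            print("Sorry, I can't help with that.")
            return
    print()

stream_with_guardrail(["How ", "to ", "make ", "something ", "naughty"])
```

The point of the sketch is that the model and the filter are separate parts, which is why you briefly see the original text before the filter catches up.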
Except that models you can run on your own machine now exist if you have the hardware for it, such as DeepSeek, so the restrictions only exist in the cloud.
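For example, here's a minimal sketch of local inference with the Hugging Face transformers library. The checkpoint id is just one small DeepSeek distill I'm assuming fits on consumer hardware; swap in whatever yours can hold:

```python
# Minimal local-inference sketch using Hugging Face transformers.
# Assumption: the DeepSeek-R1-Distill-Qwen-1.5B checkpoint; pick any
# model that actually fits your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain how a transformer generates text."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once the weights are on your own disk, there's no server-side layer sitting between the model's output and your screen to blur anything out.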