A safety team at a hammer company cannot prevent me from using the hammer to bang your head.
You can align to what the user wants, and then you are a hammer. This is alignment > safety.
Or you take a safety-first approach, where the AI decides what "safe" means and does its own bidding instead of yours. This is safety > alignment.
I prefer hammers, to be honest. Mostly because humans can be prosecuted and AIs can't. So if a human wants to commit a crime with the AI, the human should be able to, because the opposite turns into dystopia fast.