My preferred version of "safe" is "in its actions, considers and mostly upholds usually unstated constraints like 'don't kill unless necessary', 'keep Earth habitable', 'avoid toppling society unless really well justified for the greater good', etc." That's the kind of framing that was prevalent pre-ChatGPT. It's not terribly relevant for chat software, but it becomes increasingly important as chat models turn into agents.

Of course, once you have that framing, additional goals like "don't give people psychosis", "don't give step-by-step instructions for making explosives, even if Wikipedia already tells you how", or "don't harm our company's reputation by being racist" are conceptually similar.

On the other hand, "don't make weapon systems" or "never harm anyone" might not be viable goals, not only because they are difficult or even impossible to define, but also because there is huge financial and political pressure not to limit your AI in that way (see Anthropic).