No, because the safeguards should be appropriate to an LLM, not to a human.

(The LLM might act like one of the humans above, but it will have other problematic behaviours too.)

That's fair, largely because an LLM is far more capable of overcoming restrictions, by hook or by crook, as TFA shows. However, most systems today are not resilient even against what humans can do, so starting there would go a long way towards limiting the harm LLMs can cause.