I don’t mean that in a small way (ie sometimes they don’t follow rules), I mean it in the more important sense that they don’t have a sense of right or wrong and the instructions we give them are just more context, they are not hard constraints as most humans would see them.

This leads to endless frustration as people try to use text to constrain what LLMs generate, it’s fundamentally not going to work because of how they function.