You can prompt an LLM to add typos, though

Interestingly, you can't do the same with instructions like "no em dashes": the model will agree, then proceed to use them regardless.
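Since the model won't reliably comply, the practical workaround is to filter its output after the fact. A minimal sketch in Python (the function name and sample text are mine, not from any particular API):

```python
import re

def strip_em_dashes(text: str) -> str:
    # Replace an em dash, along with any surrounding spaces,
    # with ", " so the sentence still reads naturally.
    return re.sub(r"\s*\u2014\s*", ", ", text)

# `model_output` stands in for whatever your LLM call returns.
model_output = "Sure \u2014 here is the text \u2014 with no em dashes."
print(strip_em_dashes(model_output))
# prints: Sure, here is the text, with no em dashes.
```

Crude, but unlike the prompt, a regex can't decide to ignore you.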

This could be related to how so-called negative prompts fail in image generation: ask ChatGPT for an image without a crocodile, say, and you'll often get one anyway.

My theory is that sprinkling em dashes into the output is an intentional measure to "watermark" LLM output.