If they've let their AI write the policy, and then they repeat that as policy, how exactly is this an "LLM hallucination" and not a real policy?

It's both, isn't it? If the AI writes the policy and is also responsible for enforcing it (by handling tickets and acting as a gatekeeper for which issues are escalated to humans who can do something about them), then the hallucination becomes real.

It's the same thing. Whether it was hallucinated upstream or in situ, the point is that it's not a real policy that the business adheres to, just something the LLM spat out.

Sure, it's a real policy. It came from their website, from the official means of support.

Is it? Can you link me to that?