Yeah it makes little sense to me that so many users would experience exactly the same "hallucination" from the same model. Unless it had been made deterministic but even then subtle changes in the wording would trigger different hallucinations, not an identical one.

What if the prompt to the “support assistant” postulates that 1) everything is a user error, 2) if it’s not, it’s a policy violation, 3) if it’s not, it may be our fuckup but we are allowed? This plus the question in the email leading to a particular answer.

Given that LLMs are trained on lots of stuff and not just the policy of this company, it’s not hard to imagine how it could conjure that the policy (plausibly) is “one session per user”, and blame them of violating it.