The fact is that a small LLM can come up with scenarios that would be kind of like it, and you are dismissing that because you want humans to be the only things capable of originality. In fact you lifted an entire aspect of the campaign from a classic novel, and the magic-mirror concept is pretty much ingrained in cultural tropes, so I'm not sure why you think a system whose basis is the entirety of human written output, and which was designed to predict more of it, couldn't come up with it.

" and you are dismissing that because you want humans to be the only things capable of originality."

I have not seen convincing cases of LLMs being capable of originality. But they do have a lot of originality in their training data.

But then, human artists have a lot of training and influences too, so it's not easy for humans to be truly original either (if that's even a thing).

And then, if you are too original, no one will like it either...

I don't think there is such a thing as "truly original", but in this concrete case I suspect something very similar was in the training data.

I'm not dismissing LLMs, but there's a reason the current crop of LLMs isn't writing hit scripts for TV shows, books, etc. The suggestion to use a "magical effect" is the equivalent of asking an LLM, "How can I break into this password-protected computer?" and having it respond, "You should use an exploit."