You're essentially summoning a character to role-play with. Just like with esoteric evocation, it's very easy to summon the wrong aspect of the spirit. Anthropic has a lot to say about this:
https://www.anthropic.com/research/persona-selection-model
Unfortunately (after reading your links), all of the control surfaces for mitigating spirit-summoning seem to live in model training, creation, and tuning, not in anything you can change meaningfully through prompting.
Perhaps the LLM itself, rather than the character you conjured in one particular chat conversation or another, is better understood to be the "spirit."
As a non-coder who only chats with pre-existing LLMs and doesn't train or tune them, I feel mostly powerless.
As I understand it, it's more that the training (and training data set) bakes in the concept attractor space (https://arxiv.org/abs/2601.11575). So the available characters are fixed, yes, and some are much stronger attractors than others. But we still have a fair amount of control over which archetype steps into the circle. As an aside, this is also why jailbreaking is fundamentally unsolved: it's not difficult to call up the characters with dark traits. They're strong attractors, in spite of (or because of?) the effort put into strengthening the pull of the Assistant character.
> As a non-coder who only chats with pre existing LLMs and doesn’t train or tune them, I feel mostly powerless.
You realize that, when it comes to only using LLMs rather than training them, you're in the 99.9% majority, right? That holds even if we only count so-called coders.
I present to you:
NVIDIA Nemotron-Personas-USA — 1 million synthetic Americans whose demographics match real US census distributions
https://huggingface.co/datasets/nvidia/Nemotron-Personas-USA
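If you want to poke at it, here's a minimal sketch for pulling it down with the Hugging Face datasets library (the split name and record fields are assumptions; check the dataset card):

    # Sketch: peek at a few Nemotron personas via the Hugging Face `datasets` library.
    # Assumes `pip install datasets`; the "train" split is a guess -- see the dataset card.
    from datasets import load_dataset

    ds = load_dataset("nvidia/Nemotron-Personas-USA", split="train", streaming=True)

    for i, persona in enumerate(ds):
        print(persona)      # each record is a dict describing one synthetic person
        if i >= 2:          # just look at the first few
            break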
I am polite when using AI, not because I mistake it for a human, but because I'm deliberately keeping it in the "professional colleague" persona. Tell it to push back, and then thank it for something it finds in your error. I may put a small self-deprecating joke in from time to time. It keeps the "mood" correct.
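For API use, the same idea can be pinned up front in a system message. A rough sketch with the OpenAI Python client, just to illustrate; the model name and wording are placeholders, not a recommendation:

    # Sketch: pin the "professional colleague" persona and ask for pushback explicitly.
    # Assumes `pip install openai` and OPENAI_API_KEY in the environment; model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "You are a blunt but professional colleague. "
                "Push back on weak reasoning and point out my errors."
            )},
            {"role": "user", "content": "Here's my plan for the migration -- tear it apart."},
        ],
    )
    print(resp.choices[0].message.content)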
Another way you can think of it is that when you're talking to an AI, you're not talking to a human, you're talking to a distillation of humanity, as a whole, in a box. You want to be selective about which portion of humanity you lead to be dominant in a conversation for a given purpose. There's a lot in there: plenty of conversations where someone makes a good critical point and a flamewar is the response, plenty where things just get hostile. I'm sure the subsequent RLHF helps with that, but it doesn't hurt to try to help it along.
I see people post their screenshots of an AI pushing back and asking the user to do it or some other AI to do it, and while I'm as amused as the next person, I wonder what is in their context window when that happens.
Agreed, putting effort into my side of the role-play almost always improves the model's responses. The attention required to do that also makes it more likely that I'll notice when the conversation first starts going off the rails: when it hits the phase transition (https://arxiv.org/abs/2508.01097). It does still seem important to start new chats regularly, regardless of growing context sizes.
> you're talking to distillation of humanity, as a whole, in a box.
This is an aside, but my impression is that it is a very selective and skewed distillation, heavily colored by English-language internet discourse and other lopsided properties of its training material, and by whoever RLHF’d it. Relatively far away from being representative of the whole of humanity.
A similar approach works for me. But I also run separate checks at the end of the session, basically questioning the premise and the logic used, for most things except brainstorming, where I allow more leeway. You can ask to be challenged, and challenged effectively, but I wonder how many people actually do that.
Spot on.