I'm pretty sure the canonical choice is picking anchor vectors - either by kNN distance to other vectors, by hand, or via something like a cross-entropy term, but then that's already baked into the loss function. Another method would be some kind of adversarial setup where the output is intentionally "stretched" and then criticized by another LLM. AFAIK the problem is scale: manually going through a pile of vectors just to ground the latent space isn't exactly economical. People are also quite conservative, especially on big model runs - stuff like Muon wasn't really popularized until the recent Qwen and Kimi releases. Obviously this is all speculation about open models; folks with more experience can chime in.
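Very roughly, the "stretch and criticize" loop I have in mind would look something like the sketch below. Every name here, the temperature-based stretching, and the preference-pair output are my own guesses at one way to wire it up, not an actual training recipe:

```python
# Hedged sketch of the "stretch then criticize" idea above. The stand-in
# functions only fake the behavior; a real setup would call actual models.
import random

def base_generate(prompt: str, temperature: float = 0.7) -> str:
    """Stand-in for the base LLM; a real setup would call model.generate()."""
    filler = ["the", "a", "very", "model", "answer", "thing"]
    k = 5 + int(temperature * 10)  # higher temperature -> longer, noisier text
    return prompt + " " + " ".join(random.choices(filler, k=k))

def stretch(prompt: str) -> str:
    """'Stretch' the output by sampling far from the usual operating point
    (here just a much higher temperature; could also be logit noise,
    steering-vector offsets, etc.)."""
    return base_generate(prompt, temperature=1.8)

def critic_score(prompt: str, completion: str) -> float:
    """Stand-in for a second LLM acting as critic/judge.
    A real critic would be prompted to rate groundedness; this fake score
    just penalizes long, rambling completions."""
    return 1.0 / (1.0 + len(completion.split()) - len(prompt.split()))

def build_preference_pair(prompt: str) -> dict:
    normal = base_generate(prompt)
    stretched = stretch(prompt)
    # The critic's preference becomes the training signal (e.g. fed into a
    # DPO/RLHF-style update); here we just record which side it preferred.
    better = normal if critic_score(prompt, normal) >= critic_score(prompt, stretched) else stretched
    return {"prompt": prompt,
            "chosen": better,
            "rejected": stretched if better is normal else normal}

if __name__ == "__main__":
    print(build_preference_pair("Explain why the sky is blue."))
```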
Maybe do something close to what I like to believe the brain does and have a meta model wrap a "base" model. The meta model takes the base model's output (edit: plus the original input) as input, along with some meta parameters - for example, the probability each token had when it was chosen, or better, which "neurons" were activated over the whole output sequence (which would include the Persona they mention). The meta model then generates new output conditioned on all of this, and that is what's shown to the user.
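Something like this, very roughly (PyTorch, with made-up shapes and a tiny GRU standing in for the meta model; the only meta parameter wired in here is the per-token probability, but activations could be concatenated the same way):

```python
# Rough sketch of the "meta model wraps a base model" idea. Shapes, the GRU,
# and the choice of meta features are all assumptions on my part.
import torch
import torch.nn as nn

class MetaWrapper(nn.Module):
    def __init__(self, vocab_size: int = 1000, d_model: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Meta model sees: embedded base-output token + a summary of the
        # original input + the probability the base model gave that token.
        self.meta_rnn = nn.GRU(d_model * 2 + 1, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, input_ids, base_output_ids, base_token_probs):
        # input_ids:        (B, T_in)   original user input
        # base_output_ids:  (B, T_out)  tokens the base model produced
        # base_token_probs: (B, T_out)  probability each chosen token had
        inp_summary = self.embed(input_ids).mean(dim=1, keepdim=True)      # (B, 1, D)
        inp_summary = inp_summary.expand(-1, base_output_ids.size(1), -1)  # (B, T_out, D)
        out_emb = self.embed(base_output_ids)                              # (B, T_out, D)
        feats = torch.cat([out_emb, inp_summary,
                           base_token_probs.unsqueeze(-1)], dim=-1)
        h, _ = self.meta_rnn(feats)
        return self.head(h)  # new logits; decoding these gives the user-facing output

# toy usage
if __name__ == "__main__":
    B, T_in, T_out, V = 2, 5, 7, 1000
    wrapper = MetaWrapper(vocab_size=V)
    logits = wrapper(torch.randint(0, V, (B, T_in)),
                     torch.randint(0, V, (B, T_out)),
                     torch.rand(B, T_out))
    print(logits.shape)  # torch.Size([2, 7, 1000])
```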
Can you describe the "meta" model more? AFAICT you're describing a "router". I think what you're thinking of is essentially what MoE does, or, in diffusion terms, a sort of ControlNet-like grounding (different mechanism, similar spirit).
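By "router" I just mean the MoE-style gating step: a small network that looks at each token's hidden state and decides which expert processes it. Toy top-1 version, with all dimensions made up:

```python
# Minimal top-1 MoE-style router, purely illustrative.
import torch
import torch.nn as nn

class TinyTop1Router(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))

    def forward(self, x):                       # x: (B, T, D)
        scores = self.gate(x).softmax(dim=-1)   # (B, T, n_experts)
        top = scores.argmax(dim=-1)             # (B, T) chosen expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = (top == i).unsqueeze(-1).float()  # route each token to its expert
            out = out + mask * expert(x)
        return out

x = torch.randn(2, 5, 64)
print(TinyTop1Router()(x).shape)  # torch.Size([2, 5, 64])
```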