I’m skeptical of the method but excited for the direction. Giving models different personalities is adjacent to giving models different values / morals. Having a diversity of model personalities is a step in the right direction.
Unfortunately, this research seems to use a very coarse method: instructing the model to be evil and then measuring how its activations change relative to a "non-evil" baseline. This is not a self-supervised approach; it requires you to inject your own heavy-handed concept of a persona into the system. Obviously, a complex and complete personality is more than the sum of your yes/no answers to personality-test questions.
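For concreteness, the contrastive extraction I'm describing is roughly a difference of mean activations between the two instruction conditions. Here's a toy sketch with synthetic activations standing in for real hidden states (in practice these would come from a forward hook on a chosen transformer layer); none of this is the authors' actual code, and the injected `true_direction` is purely for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 64

# Hypothetical "persona" direction baked into one prompt set so the
# sketch is runnable without a real model.
true_direction = rng.normal(size=hidden_dim)
true_direction /= np.linalg.norm(true_direction)

def fake_activations(n, persona_strength):
    # Stand-in for hidden states collected from the model under a given
    # system prompt; noise plus an optional persona component.
    base = rng.normal(size=(n, hidden_dim))
    return base + persona_strength * true_direction

# Activations under "be evil" instructions vs. neutral instructions.
evil_acts = fake_activations(200, persona_strength=3.0)
neutral_acts = fake_activations(200, persona_strength=0.0)

# The "persona vector" is just the difference of the two mean activations.
persona_vector = evil_acts.mean(axis=0) - neutral_acts.mean(axis=0)
persona_vector /= np.linalg.norm(persona_vector)

# Sanity check: the recovered vector should align with the injected direction.
cosine = float(persona_vector @ true_direction)
print(f"cosine similarity with injected direction: {cosine:.3f}")
```

The coarseness I'm objecting to is visible right in the setup: the whole "persona" is whatever one hand-written instruction happens to evoke, averaged into a single direction.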
However, low-rank methods may soon make it possible to give models long-lived, user-specific personalities that emerge across thousands of conversations. That's what I would happily call a persona vector.