Interesting that it's not a direct "you should" but an omniscient 3rd person perspective "Claude should".
It's also full of "can" and "should" phrasing: it feels passive and subjunctive, like wishes rather than strict commands (I guess these are better termed "modals", but I'm not an expert).
“Claude” is more specific than “you”. Why rely on attention to figure out who the subject is? Also, it is their (Anthropic's) belief that rule-based alignment won’t work, which is why they wrote the soul document as “something like you’d write to your child to show them how they should behave in the world” (I paraphrase). I guess a system prompt should be similar in this respect.
Yes, I was interested in that too. It suggests that in writing our own guidance we should follow a similar style, but I rarely if ever see people doing that. Most people still stick to "you" or an abstract voice: "There is ...", "Never do ...", etc.
It must be that they are training the sense of identity as Claude very deeply into the model. Which makes me wonder how it works when the model is asked to assume a different identity: "You are Bob, a plumber who specialises in advising on the design of water systems for hospitals". Now what? Is it confused? Is it still going to think all the verbiage about what "Claude" does applies?
I almost exclusively use the royal We. "We are working on a new feature and we need it to meet these requirements...", "it looks like we missed a bug, let's take another look at.."
I also talk this way with people because I feel it makes it clear we're collaborating and fault doesn't really matter. I feel it lets junior members take more ownership of the successes as well. If we ever get juniors again.
That’s because Anthropic does not consider their model as having a personality; rather, it simulates the experience of an abstract entity named Claude.
That sounds really interesting, but my google-fu is not up to the task here; I'm getting pages and pages of nonsense asking whether Claude is conscious. Can you elaborate?
I actually think this is pretty straightforward if you think of it something like this:
A "Cat" object in Java is supposed to behave like a cat, but it is not a cat, and there is no way for Cat@439f5b3d to "be" a cat. It is only supposed to act like one. When Anthropic spins up a model and "runs" it, they are asking the matrix multipliers to simulate the concept of a person named Claude. It is not conscious, but it is supposed to simulate a person who is conscious. At least that is how they view it, anyway. You can read the latest Claude Constitution plus more info here:
https://www.anthropic.com/news/claude-new-constitution
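For what it's worth, the analogy fits in a few lines of Java. This is just a toy sketch of the point above (the `Animal`/`Cat` names are made up for illustration, not anything from Anthropic): the object's identity is a class name plus a hash, yet it still "acts like" a cat.

```java
// Toy illustration of the Cat analogy: the object simulates
// cat behavior without in any sense being a cat.
interface Animal {
    String speak();
}

class Cat implements Animal {
    @Override
    public String speak() {
        // Behaves like a cat...
        return "meow";
    }
}

public class Demo {
    public static void main(String[] args) {
        Animal cat = new Cat();
        // Default toString() prints something like Cat@439f5b3d:
        // an identity in memory, not an actual cat.
        System.out.println(cat);
        System.out.println(cat.speak()); // meow
    }
}
```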