I'm finding it quite a bit more assertive. Every now and then it does things without asking. It cleaned up a whole lot of commented-out code that was unrelated to the change it was asked to make. Yes, it's not great to have sections of commented-out code, but destructive changes really shouldn't happen outside the scope of what it was asked to do.

And it refuses to do things it doesn't think are on task - I asked it to write a poem about cookies related to the code and it said:

> I appreciate the fun request, but writing poems about cookies isn't a code change — it's outside the scope of what I should be doing here. I'm here to help with code modifications.

I don't think previous models ever outright refused to help me like that. While I can see why Anthropic might feel it's helpful to keep the model focused on task, especially for safety reasons, I'm a little concerned by the amount of autonomy it's exhibiting as a result.