I assume the reason it’s not baked in is so they can “hotfix” it after release. But surely that many things don’t need updates afterwards. There are novels that are shorter.

Yeah, that was the original idea of system prompts: change global behaviour without retraining, and with higher authority than users. But this has slowly turned into a complete mess, at least for Anthropic. I'd love to see OpenAI's and Google's system prompts for comparison, though. It would be interesting to know whether they are just more compute-rich or more efficient.

Leaked/extracted system prompts for other chat models, particularly ChatGPT, are often around this size. Here's GPT-5.4: https://github.com/asgeirtj/system_prompts_leaks/blob/main/O...

Thanks, but that kind of confirms my belief. wc counts ~15k words in there. That may technically be the same order of magnitude, but it is only a quarter of Claude's and less than 2% of the context limit. So compared to Claude, a lot more steering is baked into the model weights rather than into the prompt.
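A back-of-the-envelope sketch of the ratios above. The 15k figure comes from wc; the Claude prompt size and the word-equivalent of the context window are assumptions inferred from the quoted ratios, not measured values:

```python
# All figures are rough assumptions from this thread, not measurements.
gpt_prompt_words = 15_000       # wc -w on the leaked GPT prompt
claude_prompt_words = 60_000    # "a quarter of Claude's" implies ~4x larger
context_limit_words = 750_000   # hypothetical word-equivalent of a large context window

print(gpt_prompt_words / claude_prompt_words)  # 0.25 -> "a quarter"
print(gpt_prompt_words / context_limit_words)  # 0.02 -> "less than 2%"
```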
