Claude responds differently to "think", "think hard", and "think very hard". Just because it's hidden from you doesn't mean a user doesn't have a choice.
Saying gpt-3.5-turbo is better than gpt-5.2 makes me think you've got some hidden motives.
https://code.claude.com/docs/en/common-workflows#use-extende...
> Phrases like "think", "think hard", "ultrathink", and "think more" are interpreted as regular prompt instructions and don't allocate thinking tokens.
They don't allocate thinking tokens, but they do change model behavior.
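For what it's worth, the Anthropic Messages API does expose an explicit thinking budget, so you can allocate thinking tokens directly instead of relying on magic phrases. A minimal Python sketch; the model name, budget values, and prompt are placeholders:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Extended thinking is enabled per request with an explicit token budget,
    # rather than inferred from phrases like "think hard" in the prompt.
    # Note: max_tokens must exceed budget_tokens, since the thinking budget
    # is carved out of the overall response.
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; any extended-thinking model
        max_tokens=16000,
        thinking={"type": "enabled", "budget_tokens": 8000},
        messages=[{"role": "user", "content": "Plan a refactor of this module."}],
    )

    for block in response.content:
        if block.type == "thinking":
            print("[thinking]", block.thinking)
        elif block.type == "text":
            print(block.text)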
I was getting this in my Claude Code app; it seems clear to me that they didn't want users doing that anymore and that it was deprecated. https://i.redd.it/jvemmk1wdndg1.jpeg
Thx for the correction. It changed a couple of weeks ago. https://decodeclaude.com/ultrathink-deprecated/
Nice blog; this post is interesting: https://decodeclaude.com/compaction-deep-dive/ I didn't know about Microcompaction!
If you're a big context/compaction fan and want another fun fact: did you know that instead of doing regular compaction (prompting the agent to summarize the conversation in a particular way and starting a new conversation with that summary), Codex passes around a compressed, encrypted object that supposedly preserves the latent space of the previous conversation in the new one?
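For contrast, "regular" compaction as described above boils down to something like this sketch, assuming an Anthropic-style client; the summarization prompt is invented for illustration, not what any lab actually ships:

    # Prompt-based compaction: ask the model to summarize the conversation,
    # then start a fresh conversation seeded with that summary.
    # The summarization prompt below is illustrative only.
    def compact(client, model, messages):
        summary = client.messages.create(
            model=model,
            max_tokens=2000,
            messages=messages + [{
                "role": "user",
                "content": "Summarize this conversation so far: goals, key "
                           "decisions, open tasks, and relevant file names.",
            }],
        ).content[0].text  # assumes the first block is the text reply
        # The new conversation carries only the summary, not the full history.
        return [{"role": "user", "content": "Context from earlier session:\n" + summary}]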
https://openai.com/index/unrolling-the-codex-agent-loop/ https://platform.openai.com/docs/guides/conversation-state#c...
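Concretely, with the Responses API you opt out of server-side storage and ask for the encrypted reasoning items back, then replay them as input on the next turn. A sketch based on the conversation-state guide linked above; the model name and prompts are placeholders, and exact field shapes may differ:

    from openai import OpenAI

    client = OpenAI()

    # First turn: disable server-side storage and request the encrypted
    # reasoning items back in the response.
    first = client.responses.create(
        model="gpt-5",  # placeholder model name
        input="Start refactoring the parser.",
        store=False,
        include=["reasoning.encrypted_content"],
    )

    # Next turn: feed the prior output items (including the opaque encrypted
    # reasoning blobs) back in as input, so the model's internal state carries
    # over without the client ever seeing the plaintext reasoning.
    second = client.responses.create(
        model="gpt-5",
        input=list(first.output) + [{"role": "user", "content": "Now add tests."}],
        store=False,
        include=["reasoning.encrypted_content"],
    )
    print(second.output_text)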
Context management is the new frontier for these labs.