If you think that confusing message provenance is part of how thinking mode is supposed to work, I don't know what to tell you.
There is no "message provenance" in LLM machinery.
This is an illusion the chat UX concocts. Behind the scenes the tokens aren't tagged or colored.
I am aware. That is not what the guy above was suggesting, nor what I was.
Things generally exist without an LLM receiving and maintaining a representation about them.
If tooling isn't currently emitting provenance information and message separation into the context window (the latter would surprise me), and the models aren't trained to attend to it, then what I'm suggesting is that such markers could be inserted and the models tuned on them, which would mitigate the problem.
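To make the suggestion concrete, here's a minimal sketch of what "inserting provenance information" could look like: serializing a conversation with explicit role-delimiter tokens, modeled loosely on the ChatML-style templates many chat models already train on. The delimiter strings and the `assistant_thinking` role here are illustrative assumptions, not any particular model's actual special tokens.

```python
def serialize(messages):
    """Flatten a list of {role, content} dicts into one prompt string,
    tagging each span with its source role so a tuned model can learn
    to treat, say, thinking-mode text differently from user text."""
    parts = []
    for m in messages:
        # Hypothetical delimiters; real models define their own special tokens.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    return "\n".join(parts)

conversation = [
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant_thinking", "content": "Trivial arithmetic."},
    {"role": "assistant", "content": "4."},
]

print(serialize(conversation))
```

The point isn't the template syntax; it's that once every span carries an explicit role marker, training can teach the model to condition on those markers, which is exactly the provenance signal the bare token stream lacks.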
What I'm also suggesting is that the person above's snark-laden picture of thinking mode, and of how resolvable this issue is, is therefore wrong.