> It's hard to tell where the prompt stops and the CoT response starts, in fact.
That's because you're looking at the final output, which includes neither the prompt nor the intermediate chain of thought.
Good point -- I can see that, though it all ends up in the same context anyway. My point is that the model seems to prefer to conserve tokens.
That said, now I'm wondering whether all those dashes it spews out are more than just window dressing.