What I mean is that you want to convey the information to the LLM in as few total tokens as possible. In a back-and-forth discussion, the context fills up with the LLM's (possibly incorrect) responses, your corrections, and so on. All of that is wasteful and may even confuse the LLM. It's much better to pack all the information densely into the original message.
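The arithmetic behind this can be sketched directly. This is a toy illustration: the whitespace-based `count_tokens` is a crude stand-in for a model's real tokenizer, and the example messages are hypothetical, but the comparison it makes is the point.

```python
def count_tokens(text: str) -> int:
    """Rough proxy for a real tokenizer: count whitespace-separated words."""
    return len(text.split())

# A back-and-forth: vague ask, wrong answer, correction, retry.
conversation = [
    "Parse the log file and report errors.",                       # vague ask
    "Here is a script that counts all lines in the file.",         # wrong answer
    "No, I only want lines containing 'ERROR', grouped by hour.",  # correction
    "Understood, here is the revised script grouping by hour.",    # retry
]

# One dense message carrying the same requirements up front.
dense_prompt = (
    "Parse the log file, keep only lines containing 'ERROR', "
    "and report their counts grouped by hour."
)

conversation_tokens = sum(count_tokens(turn) for turn in conversation)
dense_tokens = count_tokens(dense_prompt)

# The dense prompt states the same requirements in far fewer tokens,
# and the context never contains a wrong answer to distract the model.
print(conversation_tokens, dense_tokens)
```

With a real tokenizer the exact counts differ, but the gap only widens as the back-and-forth grows: every correction turn drags the earlier mistake along in the context.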