I was under the impression that, at least for models without "reasoning", asking them to be terse hampers their ability to give complete and correct answers. Is that not so?

> asking them to be terse hampers their ability to give complete and correct answers

You can guide the reasoning and the "final" answer individually in the system prompt: ask the model to revalidate everything during reasoning and explore all potential options, but then steer the final answer to be brief and concise. Of course, this depends a lot on the model; some respond to it better than others. A minimal sketch of the idea is below.
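
For illustration, here's a rough sketch assuming the OpenAI Python SDK; the model name, prompt wording, and user question are all placeholders, and the phrasing that actually works will vary by model:

```python
# Minimal sketch: one system prompt that steers reasoning and the
# final answer separately. Requires `pip install openai` and an
# OPENAI_API_KEY in the environment. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """\
While reasoning through the problem, be exhaustive: revalidate every
assumption and explore all plausible options before committing.
In the final answer, be brief and concise: state the conclusion in a
few sentences without restating the reasoning.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; behavior differs a lot across models
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Why does my async handler deadlock?"},
    ],
)
print(response.choices[0].message.content)
```

With reasoning models you get an explicit reasoning phase to aim the first half of the prompt at; with non-reasoning models the same split still nudges them to "think" before answering tersely, just less reliably.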