In theory, you could do that to increase speed at higher temperatures, but it would subtly bias your output toward the draft model's preferences: rather than sampling from the main model's probabilities, you would be accepting a draft model's pick whenever it is close enough.
As far as I know, this is not done in practice. Currently popular implementations always match the main model's output distribution exactly, so the draft model only affects speed.
Here is the line in vLLM's source code that determines whether a draft token is accepted:
It does have a branch that checks only token-id equality, but that branch is used only when the temperature is 0.
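To make the non-greedy case concrete, here is a minimal sketch of the standard rejection-sampling acceptance step used in speculative decoding (as described in the speculative sampling papers; the function name and shape are my own, not vLLM's API). Accepting a draft token with probability min(1, p/q) and resampling rejections from the renormalized residual max(p - q, 0) provably yields samples from the main model's distribution, which is why the output matches the main model exactly.

```python
import numpy as np

def accept_or_resample(p, q, draft_token, rng):
    """One acceptance step of speculative sampling (illustrative sketch).

    p: main (target) model probabilities over the vocabulary
    q: draft model probabilities over the vocabulary
    draft_token: token id proposed by the draft model

    Accept the draft token with probability min(1, p[t] / q[t]);
    on rejection, resample from the residual distribution
    max(p - q, 0), renormalized. The combined procedure samples
    exactly from p, so the draft model never changes the output
    distribution, only the speed.
    """
    if rng.random() < min(1.0, p[draft_token] / q[draft_token]):
        return draft_token, True
    residual = np.maximum(p - q, 0.0)
    residual /= residual.sum()
    return rng.choice(len(p), p=residual), False
```

When temperature is 0, both distributions collapse to one-hot vectors, and this rule reduces to the token-id equality check mentioned above.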