Why do they need to run benchmarks to confirm performance? Can't they run an example prompt and verify they get the exact same output token probabilities for all prompts? The fact that they are not doing this makes me suspicious that they are in fact not doing the exact same thing as vLLM.

It is also a bit weird that they are not incorporating speculative decoding, that seems like a critical performance optimization, especially for decode heavy workloads.

Yes, speculative decoding would make both us and vLLM faster, but we believe the bump would be roughly even on both sides, so we didn't include it in this comparison. Worth another test!

> Can't they run an example prompt and verify they get the exact same output token probabilities for all prompts?

You don’t even get that with GPUs in general, or really floating point in general.

The Art of Computer Programming, Volume 2: Seminumerical Algorithms, section 4.2.2 explains why floating-point addition loses the associativity property.
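A minimal illustration of the point (a generic IEEE-754 example, not anything specific to vLLM): because floating-point addition is not associative, any change in reduction order, such as a different batch size or a different kernel, can change the result in the last bits, so bitwise-identical logits across implementations is too strong a requirement.

```python
# Floating-point addition is not associative: summing the same three
# numbers in a different order gives results that differ in the last bits.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # sum left-to-right
right = a + (b + c)  # sum right-to-left

print(left == right)  # False on IEEE-754 doubles
print(left, right)    # 0.6000000000000001 vs 0.6
```

The same effect shows up in parallel reductions on GPUs, where the summation order depends on how work is split across threads.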

Apartness relations are another possible lens.

> It is also a bit weird that they are not incorporating speculative decoding

Wouldn’t speculative decoding decrease overall throughput, but optimise (perceived) responsiveness?

For the compute-bound regime (high batch size), yes, but at low batch sizes it could improve throughput as well.
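To make the mechanism concrete, here is a toy sketch of greedy speculative decoding (not any engine's real implementation; both "models" are hypothetical dict-based stand-ins): a cheap draft model proposes k tokens, the target model checks them, and the longest agreeing prefix is accepted. At low batch sizes the target's verification pass is memory-bound, so checking k tokens costs about the same as generating one, which is where the throughput win comes from.

```python
def draft_next(ctx):
    # Hypothetical cheap draft model: next token from a lookup table.
    return {"the": "cat", "cat": "sat", "sat": "on"}.get(ctx[-1], "the")

def target_next(ctx):
    # Hypothetical strong target model; disagrees with the draft after "sat".
    return {"the": "cat", "cat": "sat", "sat": "down"}.get(ctx[-1], "the")

def speculate(ctx, k=3):
    """Propose k draft tokens, then accept the prefix the target agrees
    with, plus the target's own correction, so at least one token is
    always produced per verification step."""
    proposal = list(ctx)
    for _ in range(k):
        proposal.append(draft_next(proposal))
    accepted = list(ctx)
    for tok in proposal[len(ctx):]:
        if target_next(accepted) == tok:
            accepted.append(tok)  # draft agreed with target: keep it
        else:
            accepted.append(target_next(accepted))  # take target's token, stop
            break
    return accepted

print(speculate(["the"]))  # ['the', 'cat', 'sat', 'down']
```

Here one verification step yields three tokens instead of one; when the draft agrees often, that is the decode-heavy speedup being discussed.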