It should also be noted that PyTorch has a page about reproducibility: https://docs.pytorch.org/docs/stable/notes/randomness.html

TL;DR

Seed your PRNGs and call torch.use_deterministic_algorithms(True) to opt into deterministic kernels. They may be slightly slower, but in practice you probably won't notice.
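
For concreteness, a minimal sketch of what that looks like (the seed_everything helper is just illustrative; the CUBLAS_WORKSPACE_CONFIG bit comes from the linked page):

    import os
    import random

    import numpy as np
    import torch

    def seed_everything(seed: int = 0) -> None:
        # Seed every PRNG that PyTorch code commonly touches.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)  # also seeds all CUDA devices

    # Per the linked page: some cuBLAS ops are only deterministic if this
    # is set before the CUDA context is created.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

    seed_everything(0)
    torch.backends.cudnn.benchmark = False    # benchmarking may pick non-deterministic kernels
    torch.use_deterministic_algorithms(True)  # raises if an op has no deterministic kernel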

Note that results will still differ across driver versions and GPU models. It would be great if NVIDIA tried harder in that regard.

The blog post is about LLM non-determinism in the context of serving at scale (variable batch sizes). The page you link to only covers run-to-run determinism, which implicitly assumes a fixed batch size.
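
To make the distinction concrete: floating-point addition is not associative, and kernels choose different reduction strategies for different shapes, so a request's output can change depending on which other requests it happens to be batched with, even when every kernel is run-to-run deterministic. A minimal sketch of the underlying effect (CPU-only, no model needed):

    import torch

    torch.manual_seed(0)
    x = torch.randn(10_000, dtype=torch.float32)

    # Same numbers, two reduction orders: float32 addition is not
    # associative, so the results can differ in the last bits.
    s1 = x.sum()
    s2 = x.view(100, 100).sum(dim=1).sum()
    print(f"{s1.item():.10f} vs {s2.item():.10f}, diff = {(s1 - s2).item():.3e}")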