For "bug" reproduction purposes. It is easier to debug a model if the same string always produces the same incorrect or strange LLM output, not every 100th time you run it.

If there is a bug (a behavior defined by whatever criteria you choose), it is just a single path through a very complex system with high connectivity.

This nonlinear, chaotic behavior, regardless of the implementation details of the black box, makes an LLM seem nondeterministic. But an LLM is just a pseudo-random number generator sampling from a probability distribution.
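A toy sketch of that point (the vocabulary and probabilities are invented for illustration): the "model" is a probability distribution over the next token and the sampler is a seeded PRNG, so the same seed reproduces the same output, while different seeds make the output look nondeterministic.

```python
import random

# Toy next-token distribution standing in for the model's output.
vocab = ["cat", "dog", "fish", "<eos>"]
probs = [0.5, 0.3, 0.15, 0.05]

def generate(seed, max_tokens=10):
    rng = random.Random(seed)  # the only source of "randomness"
    out = []
    for _ in range(max_tokens):
        token = rng.choices(vocab, weights=probs, k=1)[0]
        if token == "<eos>":
            break
        out.append(token)
    return " ".join(out)

print(generate(seed=42))  # same seed -> identical output every run
print(generate(seed=42))  # the "bug path" is now reproducible
print(generate(seed=7))   # different seed -> looks nondeterministic
```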

(As I write this on my iPhone with text completion, I can see this seemingly nondeterministic behavior.)