You are just adding random behavior to the system to create variation in the responses.

Random behavior in the inputs, or in the operations, produces random behavior in the outputs. But there is no statistical expression or characterization that can predict the distribution of one from the other.

You can't say: I want this much spread in the outputs, so I will add this much spread to the inputs, the weights, or other operational details.

Even if you create an exhaustive profile of "temperature" versus output distributions across the training set, it will only hold for exactly that training set, on exactly that model, under exactly those random conditions. It will vary significantly and unpredictably across subsets of that data, on new data, and with different random numbers injected (even when drawn from the same random distribution!).
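To make the "temperature" knob concrete: a minimal sketch of softmax sampling with temperature, the standard mechanism behind that setting. The logits here are made up for illustration; real models produce thousands of them per step.

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature before the softmax:
    # low temperature sharpens the distribution toward the top logit,
    # high temperature flattens it toward uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample(logits, temperature, rng):
    # Draw one index from the temperature-adjusted distribution.
    probs = softmax_with_temperature(logits, temperature)
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Hypothetical logits for three tokens.
logits = [2.0, 1.0, 0.5]
rng = random.Random(0)
for t in (0.2, 1.0, 2.0):
    print(t, softmax_with_temperature(logits, t))
```

Note this shows only the local, mechanical effect of temperature on one distribution; the argument above is about the lack of any global relation between the temperature setting and the distribution of whole responses.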

Statistics are a very specific way to represent a very narrow kind of variation, whether measured in a system or produced by one. But many systems with variation, such as complex chaotic systems or complex nonlinear systems (as in neural models!), can defy robust or meaningful statistical representation or analysis.
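A one-line chaotic system makes the point: identically distributed noise on the input gives outputs whose spread has no simple relation to the input noise. A minimal sketch using the logistic map (the noise scale and step count are chosen only for illustration):

```python
import random

def logistic_map(x, r=4.0, steps=50):
    # r = 4.0 puts the logistic map in its chaotic regime, where
    # nearby starting points diverge exponentially fast.
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

rng = random.Random(42)
base = 0.3
# Inject tiny, identically distributed noise into the input...
outputs = [logistic_map(base + rng.uniform(-1e-6, 1e-6)) for _ in range(5)]
# ...and the outputs end up scattered across the unit interval,
# with no useful mapping from the input noise distribution to the
# output distribution.
print(outputs)
```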

(Another way to put this: you can measure logical properties of any system, such as whether an output is greater than some threshold, or whether two outputs are equal. Those logical measurements can be useful, but they don't make it a logical system.

Any system with any kind of variation can have potentially useful statistical-type measurements done on it. Any deterministic system can have randomness injected to create randomly varying output. But neither of those situations makes the system a statistically based system.)
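The parenthetical can be sketched directly: take a fixed deterministic rule, inject randomness at its input, and both statistical and logical measurements become available on the outputs, without the rule itself containing any probability. The rule below is an arbitrary nonlinear function chosen for illustration.

```python
import random
import statistics

def deterministic_system(x):
    # A fixed, fully deterministic nonlinear rule: no probability
    # appears anywhere in its definition.
    return x ** 3 - 2.0 * x

rng = random.Random(7)
# Inject randomness at the input to manufacture varying outputs...
outputs = [deterministic_system(rng.gauss(0.0, 1.0)) for _ in range(1000)]

# ...and statistical-type measurements, plus logical ones like a
# threshold test, can now be taken on those outputs. None of this
# makes the rule a statistical (or logical) system.
print(statistics.mean(outputs), statistics.stdev(outputs))
print(sum(o > 0.0 for o in outputs))  # a logical measurement per output
```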