I agree. This visualization gets the basic idea across, but it doesn't actually tell you how they are implemented mathematically.
It doesn't tell you that each neuron computes a dot product of the input with its weights, that the bias is simply added rather than acting as a threshold, or that an activation function serves as a differentiable stand-in for a hard threshold.
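To make that concrete, here's a minimal sketch of what a single neuron computes under that description: a dot product of inputs and weights, an added bias, and a sigmoid as the differentiable activation. The input values and weights are just illustrative.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum: dot product of inputs and weights, plus an additive bias.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A hard threshold would be: 1.0 if z > 0 else 0.0 (not differentiable).
    # Instead, use a sigmoid -- a smooth, differentiable "soft threshold".
    return 1.0 / (1.0 + math.exp(-z))

print(neuron([1.0, 0.5], [0.2, -0.4], 0.1))
```

The sigmoid here could be swapped for any other differentiable activation (tanh, ReLU, etc.); the key point is that, unlike a hard step, it has a usable gradient everywhere.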
Without that critical information there is no easy way to explain how a neural network is trained: if the neurons used hard, non-differentiable thresholds you couldn't use gradient descent at all, and you'd be forced to fall back on something like evolutionary algorithms.