A spiking neural network can encode analog values through time-based encoding: the interval between two transitions (spikes) represents an analog value on a single connection, much like PWM. This requires fewer connections per value, and the gaps between transitions are larger, meaning fewer switching events overall.
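As a minimal sketch of the idea, the value-to-interval mapping can be written as a pair of functions. The names, the time unit `dt`, and the value range are illustrative assumptions, not part of any specific SNN hardware or library:

```python
# Interval (time-based) encoding sketch: a larger analog value maps to
# a longer gap between two transitions on the same wire, PWM-style.
# `dt`, `v_min`, `v_max` are assumed parameters for illustration.

def encode_interval(value, t_start=0.0, dt=1e-3, v_min=0.0, v_max=1.0):
    """Map an analog value in [v_min, v_max] to two transition times."""
    gap = dt * (value - v_min) / (v_max - v_min)  # gap grows linearly with value
    return (t_start, t_start + gap)

def decode_interval(t0, t1, dt=1e-3, v_min=0.0, v_max=1.0):
    """Recover the analog value from the gap between the transitions."""
    return v_min + (t1 - t0) / dt * (v_max - v_min)

t0, t1 = encode_interval(0.25)
assert abs(decode_interval(t0, t1) - 0.25) < 1e-9
```

Note that one value needs only two transitions on one wire, whereas a binary bus would toggle many wires to carry the same resolution.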
For those who don't know why this matters: transistors, and all electrical devices including wires, act as tiny capacitors. For a transistor to switch from one state to another, that capacitance must be charged or discharged. Each charge/discharge cycle costs energy, and the more often you switch, the more energy you spend.
A fully trained SNN does not change its synapses, so the voltages inside the routing hardware, which most likely dominates the energy cost by far, stay put between events. Classic ANNs, by contrast, have to perform the routing as a GEMV (general matrix-vector multiply) over and over again on every inference.
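A rough operation count illustrates the contrast. This is a sketch under assumed sizes and an assumed ~5% spike rate, not a model of any particular accelerator:

```python
# Cost comparison sketch: a dense ANN layer performs N*M
# multiply-accumulates (one GEMV) per inference no matter what, while
# an event-driven layer only routes the synapses of neurons that
# actually spiked. Layer sizes and spike rate are assumptions.

import numpy as np

rng = np.random.default_rng(0)
N, M = 1024, 1024
W = rng.standard_normal((M, N))

# ANN: the full GEMV runs every inference, regardless of input content.
x = rng.standard_normal(N)
y = W @ x
ann_macs = N * M                      # ~1M MACs, every single time

# Event-driven update: only active (spiking) inputs trigger routing.
spikes = rng.random(N) < 0.05         # assume ~5% of inputs spike this step
y_events = W[:, spikes].sum(axis=1)   # one column accumulation per spike
event_ops = int(spikes.sum()) * M

print(ann_macs, event_ops)            # event_ops is ~5% of ann_macs
```

With sparse activity the event-driven side touches only a small fraction of the weight matrix per step, which is where the claimed energy advantage comes from.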