>As far as I am aware, FPGAs can only be reprogrammed a fixed # of times, so you won't get very far into the open sea before you run out of provisions
That's not true, unless you're talking about mask-programmed FPGAs, where the configuration is burned into the metal layers to avoid the silicon-area overhead of configuration memory. And even in that case the finite number is exactly one, because the FPGA comes preprogrammed out of the fab.
Almost every conventional FPGA stores its configuration in SRAM, which means you have the opposite problem: the configuration is volatile, so you need an external SPI flash chip to hold the bitstream and reprogram the FPGA every time it powers up.
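To make it concrete: with the open-source iCE40 toolchain the same programming tool can put a bitstream either in the external SPI flash (persistent, loaded by the FPGA at every power-up) or straight into the FPGA's configuration SRAM (gone at power-off). This assumes an iceprog-supported iCE40 board, and the file name is just an example:

```shell
# Write the bitstream to the board's SPI flash; the FPGA loads it
# from there on every power-up (survives power cycles).
iceprog my_design.bin

# Or load it directly into the FPGA's configuration SRAM with -S;
# faster for development, but lost at power-off.
iceprog -S my_design.bin
```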
The big problem with SNNs is that there is no easy way to train them directly. The best you can do is train them like ANNs with backpropagation (e.g. by converting a trained ANN, or by using surrogate gradients for the non-differentiable spike function), which means SNNs are just an exotic inference target and not a full platform for both training and inference.
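To illustrate the usual workaround: backprop through a spiking neuron by keeping the hard threshold in the forward pass but swapping in a smooth "surrogate" derivative in the backward pass. Everything below (the fast-sigmoid surrogate, the toy task, all names) is an illustrative sketch, not taken from any particular SNN framework:

```python
import numpy as np

def spike(v, threshold=0.0):
    """Forward pass: hard threshold, emits 0/1 spikes (non-differentiable)."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=0.0, slope=10.0):
    """Backward pass: the derivative of a steep sigmoid stands in for the
    Heaviside derivative, which is zero almost everywhere."""
    s = 1.0 / (1.0 + np.exp(-slope * (v - threshold)))
    return slope * s * (1.0 - s)

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 3))              # toy inputs
y = (x.sum(axis=1) > 0).astype(float)      # toy target: spike iff sum > 0
w = np.zeros(3)                            # one neuron's weights

for _ in range(300):
    v = x @ w                              # membrane potential (one step, no leak)
    err = spike(v) - y                     # grad of squared error w.r.t. spikes
    # Chain rule, with the surrogate standing in for d(spike)/dv:
    w -= 0.1 * ((err * surrogate_grad(v)) @ x) / len(x)

accuracy = (spike(x @ w) == y).mean()
```

The point is that the spiking forward pass is kept exact while the gradient is a hand-picked approximation, which is exactly why the result feels like "ANN training with a spiking inference target" rather than native SNN training.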