You don't necessarily have to store the parameters in fp32 for gradient updates. I experimented with this and got full fine-tuning of all parameters working with weights stored at 3-bit — slightly more than 3 bits per parameter in practice, since the block-wise scales are kept at higher precision — which is about as low as you can go before "normal" training starts breaking down.
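This isn't my exact training setup, but a minimal numpy sketch of symmetric block-wise quantization shows where the extra fraction of a bit comes from: each block of values shares one higher-precision scale, so the effective storage is slightly above the nominal bit width (the block size of 64 and fp16 scales here are illustrative choices, not the specific configuration described above):

```python
import numpy as np

def quantize_blockwise(w, bits=3, block_size=64):
    """Quantize a flat fp32 array to `bits` bits per value,
    with one fp16 absmax scale per block of `block_size` values."""
    qmax = 2 ** (bits - 1) - 1                      # 3 for signed 3-bit
    blocks = w.reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True) / qmax
    scales = np.where(scales == 0, 1.0, scales).astype(np.float16)
    q = np.clip(np.round(blocks / scales.astype(np.float32)),
                -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize_blockwise(q, scales):
    """Reconstruct fp32 values from quantized codes and block scales."""
    return (q.astype(np.float32) * scales.astype(np.float32)).ravel()

np.random.seed(0)
w = np.random.randn(1024).astype(np.float32)
q, scales = quantize_blockwise(w)
w_hat = dequantize_blockwise(q, scales)

# Effective storage: 3 bits per value plus a 16-bit scale shared by
# each 64-value block -> 3 + 16/64 = 3.25 bits per parameter.
bits_per_param = 3 + 16 / 64
```

The gradient update would then be applied to the dequantized fp32 view, with the result re-quantized back into this representation.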