> Also activations aren't quantized, I think?

Right — per the closing lines of the conclusion: "Future work will focus on the implementation of binary normalization layers using single-bit arrays operations, as well as on quantizing layer activations to 8 or 16-bit precision. These improvements are expected to further enhance the efficiency and performance of the binary neural network models."
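So activations stay in float for now. The paper doesn't spell out how the planned 8/16-bit activation quantization would work, but a common approach is symmetric linear quantization; a minimal sketch (function names and details are my own, not from the paper):

```python
import numpy as np

def quantize_activations(x, bits=8):
    """Symmetric linear quantization of a float activation tensor.

    Maps floats in [-max|x|, +max|x|] onto signed integers, returning
    the quantized tensor and the scale needed to dequantize it.
    """
    qmax = 2 ** (bits - 1) - 1           # 127 for 8-bit, 32767 for 16-bit
    scale = float(np.max(np.abs(x))) / qmax
    if scale == 0.0:                     # all-zero input: avoid divide-by-zero
        scale = 1.0
    dtype = np.int8 if bits == 8 else np.int16
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(dtype)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float activations for the next layer.
    return q.astype(np.float32) * scale
```

The round-trip error is bounded by half the scale, which is why 8 bits is usually plenty for activations even when weights are binary.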