That should be easy to test: run a 16-bit model on various benchmarks, once with a fresh context and once with the context padded with irrelevant tokens, and record the relative performance degradation. Then do the same for a quantized version of the same model. If the quantized model shows a significantly larger relative drop from context rot, numerical error is the likely cause.
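A minimal sketch of the comparison step, with made-up placeholder accuracies standing in for real benchmark results (the numbers and the threshold are assumptions for illustration only):

```python
def relative_drop(acc_fresh: float, acc_padded: float) -> float:
    """Relative degradation when the context is padded with irrelevant
    tokens: 0.0 means no drop, 0.5 means half the accuracy was lost."""
    return (acc_fresh - acc_padded) / acc_fresh

# Placeholder accuracies (fresh vs. padded context) -- not real data.
fp16 = {"fresh": 0.80, "padded": 0.72}   # hypothetical 16-bit baseline
quant = {"fresh": 0.78, "padded": 0.62}  # hypothetical quantized model

drop_fp16 = relative_drop(fp16["fresh"], fp16["padded"])
drop_quant = relative_drop(quant["fresh"], quant["padded"])

print(f"16-bit relative drop:    {drop_fp16:.3f}")
print(f"quantized relative drop: {drop_quant:.3f}")

# A meaningfully larger drop for the quantized model would implicate
# numerical error in context rot.
if drop_quant > drop_fp16:
    print("quantized model degrades more under context rot")
```

With these placeholder numbers, the 16-bit model loses 10% of its accuracy while the quantized model loses about 20%, which under the hypothesis would point at numerical error.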