That's actually correct and intentional. TurboQuant applies the same rotation matrix to every vector. The key insight is that any unit vector, when multiplied by a random orthogonal matrix, produces coordinates with a known distribution (Beta/arcsine in 2D, near-Gaussian in high-d). The randomness is in the matrix itself (generated once from a seed), not per-vector. Since the distribution is the same regardless of the input vector, a single precomputed quantization grid works for everything. I've updated the description to make this clearer.
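A minimal NumPy sketch of that mechanism (the function name and setup are illustrative, not TurboQuant's actual API): draw one Haar-random orthogonal matrix from a fixed seed, rotate a worst-case one-hot unit vector, and observe that the resulting coordinates are spread out like N(0, 1/d), which is what lets a single precomputed grid cover every input:

```python
import numpy as np

def random_rotation(d, seed=0):
    """Haar-distributed orthogonal matrix via QR of a Gaussian matrix."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((d, d))
    Q, R = np.linalg.qr(A)
    # Sign correction so Q is uniform over the orthogonal group,
    # not biased by QR's sign convention.
    return Q * np.sign(np.diag(R))

d = 1024
Q = random_rotation(d)  # generated once from a seed, shared by all vectors

# A worst-case input: all mass on one coordinate.
x = np.zeros(d)
x[0] = 1.0

# After rotation the coordinates look like iid N(0, 1/d) samples:
y = Q @ x
print(y.std(), 1 / np.sqrt(d))  # the two values are close
```

Because the output coordinate distribution is the same no matter which unit vector goes in, the quantization grid only has to be tuned to that one distribution.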
Thanks. From this visualization, though, it's still not clear how the random rotation is beneficial. I guess it makes more sense for higher-dimensional vectors.