Technical clarification on TurboQuant / RaBitQ for those following the recent discussion
First author Jianyang Gao alleges the TurboQuant paper misrepresents his team's prior work and experimental setup.
Jianyang Gao, the lead author of the RaBitQ quantization research, has issued a detailed public clarification to address what he describes as "substantial confusion" created by the promotion of a newer method called TurboQuant. In a post aimed at the technical community, Gao outlines three core concerns: that TurboQuant's description of the foundational RaBitQ method is materially incomplete, its theoretical claims against RaBitQ are unsupported, and its empirical comparisons lack full disclosure of the testing setup. The post states these issues were raised privately with the TurboQuant authors months ago, yet the problematic statements remain in their submission for the ICLR 2026 conference.
Gao specifies that TurboQuant's paper omits a key component of RaBitQ, the Johnson-Lindenstrauss transformation used for random rotation, even after conference reviewers requested clarification. He also contests the paper's characterization of RaBitQ's performance guarantees as "suboptimal," noting that his team had already proven asymptotic optimality. Most critically, emails reveal that the TurboQuant authors acknowledged running the RaBitQ baseline on a single CPU core with multiprocessing disabled, while benchmarking their own method on an A100 GPU, a disparity not clearly disclosed in the public paper. The statement serves as a formal record to correct the technical narrative ahead of the ICLR conference.
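For readers unfamiliar with the omitted step: a Johnson-Lindenstrauss-style random rotation applies a random orthogonal matrix to each vector before quantization, which preserves distances while spreading energy evenly across coordinates. The sketch below is purely illustrative of that general idea and is not the RaBitQ implementation; the dimension, the QR-based sampling of the rotation, and the 1-bit sign code are all assumptions made for the example.

```python
import numpy as np

# Illustrative sketch of random-rotation preprocessing before 1-bit
# quantization. NOT the RaBitQ algorithm; details are assumptions.

rng = np.random.default_rng(0)
d = 64  # assumed dimension for the example

# Sample a random orthogonal matrix via QR decomposition of a Gaussian matrix.
G = rng.standard_normal((d, d))
Q, _ = np.linalg.qr(G)

x = rng.standard_normal(d)
x_rot = Q @ x  # an orthogonal rotation preserves the Euclidean norm

# A crude 1-bit code: keep only the signs of the rotated coordinates.
code = (x_rot > 0).astype(np.uint8)

# Sanity check: the rotation did not change the vector's length.
assert np.isclose(np.linalg.norm(x_rot), np.linalg.norm(x))
```

Because the rotation is norm-preserving, distance estimates computed from the binary codes can be related back to distances in the original space, which is why omitting this step from a method description is a substantive gap rather than a presentational one.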
- TurboQuant's paper allegedly provides an incomplete description of the prior RaBitQ method, omitting its key Johnson-Lindenstrauss transformation step.
- Theoretical claims labeling RaBitQ's guarantees as "suboptimal" are contested, as RaBitQ authors had already proven asymptotic optimality in a 2024 paper.
- Empirical comparisons are questioned, as internal emails reveal RaBitQ was benchmarked on a single CPU while TurboQuant ran on an A100 GPU.
Why It Matters
The dispute highlights issues of academic integrity and fair benchmarking in fast-moving AI research, where comparisons run on mismatched hardware can distort which techniques gain traction.