Research & Papers

[D] TurboQuant author replies on OpenReview

Quantization paper controversy erupts as authors claim their novelty lies in an exact distribution derivation, not in material borrowed from RaBitQ.

Deep Dive

A technical controversy has erupted in the AI quantization community around the paper 'TurboQuant.' The authors have posted a detailed response on OpenReview to address claims that their core method was derived from the earlier RaBitQ technique. They state firmly that random rotation is a standard technique in the quantization literature, citing several pre-existing works, and that TurboQuant's novelty lies specifically in deriving the exact statistical distribution of the coordinates of rotated vectors. This derivation, they argue, is what enables optimal coordinate-wise quantization and constitutes their primary contribution.
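The general pipeline under discussion — rotate a vector randomly, then quantize each coordinate independently — can be illustrated with a minimal sketch. Everything below (the QR-based random rotation, the 1-bit sign quantizer, the rescaling) is a generic textbook construction for illustration, not TurboQuant's or RaBitQ's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64

# Random orthogonal rotation via QR decomposition of a Gaussian matrix --
# a common generic choice, assumed here purely for illustration.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

x = rng.standard_normal(d)
x /= np.linalg.norm(x)   # unit input vector
y = Q @ x                # rotated vector; rotation preserves the norm

# For a uniformly random unit vector, each rotated coordinate is
# approximately N(0, 1/d) when d is large -- deriving the *exact*
# distribution is what the TurboQuant authors claim as their contribution.

# Naive coordinate-wise 1-bit quantization: keep only the signs,
# rescale so the reconstruction has unit norm.
signs = np.sign(y)
y_hat = signs / np.sqrt(d)

# Cosine similarity of the reconstruction (expected ~ sqrt(2/pi) here).
print(float(y @ y_hat))
```

The point of the dispute is not this pipeline itself, which both papers share with earlier work, but how the per-coordinate quantizer is designed from the distribution of the rotated coordinates.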

The response tackles several points of contention. First, the authors address their characterization of RaBitQ as 'suboptimal': while RaBitQ's main theorem lacked an explicit optimality guarantee in its formal statement, a closer reading of its appendix revealed a strict bound, and they are accordingly updating the TurboQuant manuscript to credit RaBitQ's bounds accurately. Second, they defend their experimental focus, arguing that runtime benchmarks are 'immaterial' to their findings, which center on the compression-quality tradeoff at extreme compression levels. Finally, they note that the paper has been on arXiv since April 2025 and that they had communicated with the RaBitQ authors earlier, questioning why the concerns are only being raised now.
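The compression-quality tradeoff the authors emphasize can be sketched by sweeping the bit width of a quantizer and recording reconstruction error. The uniform coordinate-wise quantizer and the 3-sigma clip range below are illustrative assumptions, not the quantizer from either paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64
x = rng.standard_normal(d)
x /= np.linalg.norm(x)  # unit-norm input vector

# Sweep bits per coordinate for a naive uniform quantizer. The clip range
# of ~3 standard deviations (coordinates of a random unit vector are
# roughly N(0, 1/d)) is an assumption for illustration only.
errs = []
for b in (1, 2, 4, 8):
    levels = 2 ** b
    a = 3.0 / np.sqrt(d)                    # clip range [-a, a]
    step = 2 * a / levels                   # quantization step size
    q = np.clip(np.floor((x + a) / step), 0, levels - 1)
    x_hat = (q + 0.5) * step - a            # dequantize to bin centers
    errs.append(np.linalg.norm(x - x_hat))  # L2 reconstruction error
    print(f"{b} bits/coord -> L2 error {errs[-1]:.3f}")
```

Error falls as bits increase; the contested regime is the extreme low-bit end of this curve, where the authors argue quantizer design (not runtime) determines quality.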

Key Points
  • Authors defend TurboQuant's novelty as deriving the 'exact distribution' of rotated vector coordinates, not using RaBitQ's method.
  • Will update manuscript to credit RaBitQ's optimality bounds after re-examining its appendix, correcting an initial 'suboptimal' characterization.
  • Argue runtime benchmarks are immaterial; the core contribution is maintaining high accuracy at extreme compression levels, not a specific speedup.

Why It Matters

Highlights the intense scrutiny and competition in model quantization research, where claims of novelty and credit are critically important for academic and commercial advancement.