[D] ICML Rebuttal Question
Researchers face rejection despite achieving 'groundbreaking' results that surprised their entire field.
A viral Reddit post from an anonymous AI researcher has sparked a major discussion about the peer-review process at top conferences like ICML (International Conference on Machine Learning). The researcher's paper, which reportedly achieves 'groundbreaking' results that surprised experts in their field, is facing rejection primarily on the grounds of lacking 'novelty.' This critique persists even though the authors claim their method significantly outperforms all existing baselines, crucially including ones it was not expected to beat. The central conflict pits raw, demonstrable performance against the subjective and often rigid requirement for conceptual novelty.
The researcher explains that their breakthrough came from an unexpected combination: applying existing components from *outside* their specific domain alongside some novel contributions. While reviewers fixate on the reuse of known elements, the community's reaction to the results suggests a genuine advance. The post asks for strategic advice on rebutting this 'strawman' argument, touching a nerve for many who see it as a systemic flaw. It underscores how review criteria can stifle practical, high-impact engineering work that repurposes tools in clever new ways, favoring purely theoretical innovation instead.
The debate goes to the heart of progress in fast-moving AI fields. If a method delivers unprecedented performance and reveals new insights ('pinpoint[ing] the reasons why'), should it matter if its building blocks are familiar? This incident reflects a growing tension between two modes of research, incremental engineering versus radical invention, and questions whether conference gatekeeping is optimally calibrated to recognize real-world impact.
- Researchers achieved 'groundbreaking' results that outperformed all baselines, including theoretically superior ones.
- ICML reviewers are recommending rejection based solely on a perceived lack of 'novelty,' ignoring performance gains.
- The method innovatively combined existing components from other domains with new elements, creating an unexpected advance.
Why It Matters
Highlights a potential flaw in AI peer review: overly rigid definitions of innovation can lead to the rejection of high-impact work.