Research & Papers

ICML final decisions rant [D]

Of 24,000 submissions, only 6,500 were accepted, and reviewers were punished for raising scores.

Deep Dive

ICML 2025's final decisions ignited a firestorm among AI researchers. The conference accepted approximately 6,500 papers out of 24,000 submissions, a roughly 27% acceptance rate that inevitably sends a massive wave of rejected papers straight into NeurIPS's pipeline. This annual cascade creates a feedback loop of high volume, low acceptance rates, and mounting reviewer fatigue. Critics argue the system encourages rushed work (papers thrown together over a single weekend) and punishes slower, more deliberate research.

The review process itself faces sharp criticism. Common complaints include reviewers dismissing papers for not including enough benchmarks (e.g., complaining of “only 200 benchmarks”) or rejecting solid work on gut-feeling novelty grounds. Area chairs often restate the initial review points without engaging with rebuttals, effectively nullifying the rebuttal process. Perhaps most concerning: reviewers who raise their scores after discussion must submit formal justifications, a disincentive that discourages upward score adjustments. This environment, combined with the sense that rejection (and even acceptance) now carries little meaning, threatens the core purpose of academic publishing: understanding long-standing problems through rigorous, fair evaluation.

Key Points
  • ICML accepted ~6,500 of ~24,000 papers (~27%); rejections will cascade into NeurIPS, amplifying its submission volume.
  • Reviewers face a built-in disincentive: raising a score triggers extra justification work.
  • Common review flaws include vague novelty-based rejections, complaints about benchmark counts, and area chairs ignoring rebuttals.

Why It Matters

Flawed conference review cycles undermine research quality, penalize thoughtful work, and waste thousands of researchers' time.