Research & Papers

[D] ICML 2026 review policy debate: 100 responses suggest Policy B may score higher, while Policy A shows higher confidence

Early poll of 100 responses reveals a 0.17-point score gap between two controversial review policies.

Deep Dive

A viral Reddit survey analyzing ICML 2026's peer review process has collected 100 responses, revealing preliminary differences between two submission policies. Policy B submissions averaged a score of 3.43 versus 3.26 for Policy A, a 0.17-point gap that, while not statistically conclusive, hints at a scoring disparity. Interestingly, the pattern reverses for reviewer confidence: Policy A reviews averaged 3.53 confidence versus 3.35 for Policy B, an inverse relationship between scores and reviewer certainty.
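Whether a 0.17-point gap is "statistically conclusive" is the kind of question a two-sample test answers. The sketch below uses entirely hypothetical score samples (the poll's raw responses were not published) to show how Welch's t-test would quantify whether such a gap stands out from noise:

```python
import statistics as st

# Hypothetical score samples -- illustrative only, NOT the poll's data.
# Review scores are assumed to sit on a small integer scale.
policy_a = [3, 4, 3, 3, 4, 2, 3, 4, 3, 3]
policy_b = [4, 3, 4, 3, 4, 3, 3, 4, 4, 3]

def welch_t(x, y):
    """Welch's t statistic for two independent samples with unequal variances."""
    mx, my = st.mean(x), st.mean(y)
    vx, vy = st.variance(x), st.variance(y)  # sample variances (n-1 denominator)
    se = (vx / len(x) + vy / len(y)) ** 0.5  # standard error of the mean difference
    return (my - mx) / se

gap = st.mean(policy_b) - st.mean(policy_a)
t = welch_t(policy_a, policy_b)
print(f"score gap: {gap:.2f}, Welch t: {t:.2f}")
```

With samples this small the t statistic stays well below the conventional ~2 threshold, which is the sense in which a gap of this size is suggestive rather than conclusive.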

The survey also examined subjective experience: 67.8% of Policy A authors felt their scores were 'harsher than expected,' compared to 58.5% for Policy B, while Policy B drew more 'lenient than expected' responses (12.2% vs. 3.4%). The poll's author emphasizes that this is descriptive data from a self-selected community poll with possible biases, not causal evidence. Still, the patterns feed ongoing debates about fairness and consistency in AI conference reviewing, especially as conferences experiment with different submission formats and review methodologies.
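The 'harsher than expected' rates can be compared the same way with a two-proportion z-test. The group sizes below are assumptions (the poll did not report how the 100 responses split between policies); the proportions are the article's:

```python
from math import sqrt

# Assumed A/B split of the 100 responses -- hypothetical, not reported by the poll.
n_a, n_b = 59, 41
# 'Harsher than expected' rates from the article.
p_a, p_b = 0.678, 0.585

# Pooled proportion under the null hypothesis of equal rates.
pooled = (p_a * n_a + p_b * n_b) / (n_a + n_b)
se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se
print(f"two-proportion z: {z:.2f}")
```

Under these assumed group sizes, |z| comes out well below 1.96, so the difference in perceived harshness would not reach 5% significance either, consistent with treating the poll as descriptive.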

Key Points
  • Policy B submissions averaged 3.43 score vs 3.26 for Policy A (100 responses)
  • Policy A reviews showed higher confidence (3.53) than Policy B (3.35)
  • 67.8% of Policy A authors felt scores were harsher than expected vs 58.5% for Policy B

Why It Matters

Reveals potential scoring inconsistencies in major AI conferences that could affect paper acceptance rates and research visibility.