AI/ML Conferences [D]
Authors address all reviewer concerns and see their scores rise, yet their papers are still rejected. What gives?
A disheartened ML researcher sparked a viral discussion on Reddit after detailing their ICML 2026 review experience. The post describes a pattern in which authors fully address reviewers' concerns and receive substantial score increases, yet their papers are still rejected, raising serious questions about the fairness and consistency of the review process. With submissions to A* conferences like ICML skyrocketing, the current system appears unable to handle the load without producing arbitrary or inconsistent decisions.
The community is now debating alternatives, from scaling up reviewer pools and adding structured revision rounds to AI-assisted triage and post-rejection appeals. The core issue remains how to balance rigorous peer review with the practical constraints of handling tens of thousands of papers. For early-career researchers, the unpredictability can derail funding and career progression, so fixing it is critical to maintaining trust in the leading AI/ML venues.
- Papers were rejected at ICML 2026 even after authors addressed all reviewer concerns and scores increased substantially.
- The review system for A* AI/ML conferences is overwhelmed by the sheer volume of submissions.
- Researchers are calling for systemic reforms, including larger reviewer pools, AI-assisted review, and fairer appeal processes.
Why It Matters
Flawed reviews at top ML conferences can unfairly damage careers and discourage valuable research contributions.