Research & Papers

[ICML 2026] Scores increased and then decreased!! [D]

A reviewer silently reverted a researcher's paper score from 5 back to 4 after a successful rebuttal had raised it, lowering the paper's average to 3.75.

Deep Dive

A viral post on Reddit's Machine Learning community has exposed a concerning scenario in the peer review process for the prestigious International Conference on Machine Learning (ICML 2026). An anonymous researcher, posting under the username HelpfulSinger3762, described how they successfully addressed a reviewer's concerns during the rebuttal phase, prompting the reviewer to acknowledge the improvements and raise the paper's score from 4 to 5. The reviewer even posted a positive justification for the higher score.

However, when the author happened to check the OpenReview platform later, they discovered the reviewer had silently reduced the score back to 4. The researcher speculates the reversal occurred during discussions between the reviewer and the Area Chair (AC), a senior committee member. This post-rebuttal change lowered the paper's average score from 4.0 to 3.75, a shift that can determine acceptance or rejection at competitive conferences like ICML. The post has sparked widespread discussion about the fairness, transparency, and finality of the rebuttal process, with many in the AI research community voicing concern over such opaque last-minute alterations.

Key Points
  • A reviewer increased a paper's score from 4 to 5 after a successful rebuttal, then later reverted it to 4 without explanation.
  • The change occurred post-rebuttal, potentially during private Area Chair discussions, lowering the paper's average score to 3.75.
  • The incident, shared on Reddit, raises serious questions about the transparency and consistency of peer review at top AI conferences.

Why It Matters

This case undermines trust in the peer review system, where a researcher's career-impacting work can be judged by opaque, post-rebuttal decisions.