Research & Papers

Post Rebuttal ICML Average Scores? [D]

AI tool exposes conference review chaos: a single reviewer's score of 2 dragged an author's average down to 3.5.

Deep Dive

A viral post on an academic forum has ignited discussion about the opaque and often frustrating peer review process at major AI conferences like ICML. The author, using the AI tool Paper Co-Pilot, found that their paper's average score of 3.5 had been dragged down by a single reviewer who assigned a score of 2. That score was justified by a concern introduced only in the post-rebuttal phase, one that a different reviewer had originally raised and then dismissed as a non-issue. The incident underscores the subjective and sometimes contradictory nature of peer feedback, where a single reviewer's stance can dramatically alter a paper's fate.

The tool at the center of this discussion, Paper Co-Pilot, is providing unprecedented transparency into conference metrics. By aggregating user-reported scores, it revealed that an average score of 4.2 corresponds to being in the top 40% of submissions at ICML. This data point, previously hidden from authors, allows researchers to better contextualize their results. The incident has sparked a broader conversation about the need for more consistent, fair, and transparent review systems in fast-paced fields like machine learning, where publication in top-tier venues is critical for career advancement.

Key Points
  • Paper Co-Pilot AI tool shows an ICML average score of 4.2 is in the top 40% of papers.
  • A viral post details a reviewer giving a score of 2 post-rebuttal based on another reviewer's already-dismissed concern.
  • The incident highlights systemic issues with inconsistency and subjectivity in major AI conference peer reviews.

Why It Matters

Exposes flaws in academic publishing's gatekeeping, impacting researcher careers and the pace of scientific progress.