Research & Papers

Getting sabotaged by a reviewer at IJCAI [D]

An AI researcher alleges a reviewer made false claims and demanded policy-violating extra experiments.

Deep Dive

A viral post on an AI research forum has ignited a discussion about peer review integrity, centering on an anonymous submission to the International Joint Conference on Artificial Intelligence (IJCAI). The author claims a reviewer returned a critique falsely asserting that certain content had not been explored, when that content was "clearly shown in the paper," suggesting the reviewer did not engage with the work thoroughly. Furthermore, the reviewer allegedly demanded that the authors conduct additional experiments based on a specific, uncited work, a request the author says is explicitly "against IJCAI policy." The post frames this not as a difference of opinion but as deliberate sabotage, leaving the researcher to navigate a flawed system on their own.

The incident has sparked a broader conversation within the AI/ML community about the pressures and potential failures of the conference review process. Researchers are debating the best course of action: Should the author use the official "chairing tool" to escalate the issue to the Program Committee (PC), and will the PC actually respond? Or should they comply with the inappropriate demand by adding the extra experiments to their rebuttal, potentially setting a bad precedent? The case underscores the high-stakes nature of publication in top venues like IJCAI, where a single negative review can derail months of work, and raises questions about the mechanisms available to researchers for challenging unfair assessments.

Key Points
  • Researcher alleges an IJCAI reviewer falsely claimed certain content was not explored when it was clearly presented in the paper.
  • Reviewer demanded extra experiments based on an uncited work, which the author states violates official IJCAI conference policy.
  • The viral post seeks advice on escalating to the Program Committee, highlighting systemic peer review challenges in AI.

Why It Matters

This case exposes critical flaws in AI's high-stakes peer review system, where researcher careers hinge on often-anonymous, unaccountable evaluations.