[D] CVPR results shock: dramatic score drop between reviews and final decision
Paper scores dropped dramatically post-rebuttal, with reviewers citing a late benchmark upload despite otherwise positive initial reviews.
The CVPR 2024 review decisions have sparked significant controversy after a researcher reported dramatic score drops post-rebuttal, despite addressing reviewer concerns. The paper initially received scores of 6, 4, and 2: the first reviewer was enthusiastic, the second had concerns, and the third raised more serious issues, including the absence of uploaded benchmark results. During the rebuttal period, the author submitted results to the required online platform and informed the reviewers. Shockingly, the final scores dropped to 4, 2, and 2, with the previously positive reviewers now citing the benchmark submission as a key issue. The first reviewer still liked the method but penalized the procedural delay; the second remained unconvinced despite the author's careful responses; the third reviewer's score stayed at 2.

This case exposes potential flaws in CVPR's review mechanics, where a single procedural issue (benchmark submission timing) appears to have disproportionately influenced multiple reviewers, possibly through Area Chair commentary. The incident raises critical questions about consistency in AI conference reviews, where methodological merit can be overshadowed by administrative compliance. With CVPR being a premier computer vision conference with an acceptance rate typically around 25%, such scoring volatility affects careers and research directions. The community is now debating whether reviewers should weigh technical substance over procedural adherence, and how the rebuttal process can be improved to prevent similar situations.
- Initial scores of 6/4/2 dropped to 4/2/2 despite the benchmark results being submitted during rebuttal
- Two previously positive reviewers lowered scores citing procedural benchmark issue
- Raises questions about CVPR review consistency and Area Chair influence on scoring
Why It Matters
Highlights potential flaws in top AI conference reviews where procedural issues may outweigh technical merit, affecting research careers.