Public reviews in conferences [D]
ICLR's open review system sparks debate on transparency and reviewer accountability.
A recent Reddit discussion is stirring up the AI community: why don't all conferences publish their peer reviews the way ICLR does? The original poster points to several benefits. Public reviews give researchers a direct window into how experts in the field evaluate a paper, making the process more transparent and educational. And reviewers who know their critiques will be public are, the poster argues, pushed to invest more effort and care, improving the overall quality of feedback.
However, the idea isn't without pushback. Some worry that public reviews, even with reviewer identities masked, could invite retaliation outside the review process or discourage honest, critical feedback. Others point out that not every conference has the infrastructure or culture to manage such openness. The question remains: would a widespread shift toward public reviews strengthen or weaken the AI research ecosystem? The ICLR model has worked well for many, but adopting it at scale would require careful attention to incentives, ethics, and logistics.
- ICLR public reviews let researchers see how peers evaluate work, improving understanding and transparency.
- Reviewers may spend more effort knowing their critiques are visible to the community.
- Drawbacks include the potential for reduced candor and the need for robust identity masking and moderation.
Why It Matters
This debate could reshape AI conference norms toward greater openness, accountability, and educational value for the community.