[D] TMLR reviews seem more reliable than ICML/NeurIPS/ICLR
A researcher's viral post claims TMLR reviews are more thorough and constructive than rushed ICML feedback.
A researcher's firsthand account of the peer review process is sparking debate about quality in AI academia. Having submitted to the International Conference on Machine Learning (ICML) and also received reviews at the Transactions on Machine Learning Research (TMLR) and the International Conference on Learning Representations (ICLR), the author concludes that TMLR's review quality is significantly superior. They criticize many ICML reviews as rushed, low-confidence, or hostile without offering a constructive path to improvement. In contrast, they praise TMLR reviewers for knowing the topic, asking reasonable questions, and raising appropriate concerns.
This comparison challenges the prestige hierarchy in AI publishing. Major conferences like ICML, NeurIPS, and ICLR operate on tight review cycles of roughly four months to meet publication deadlines, which may compromise review depth. TMLR, a sister journal of JMLR, employs a rolling, journal-style review process that allows more time for thorough evaluation. The viral post raises a critical question for the community: as submission volumes explode, are the flagship conferences still worth the effort if review quality falters, or should researchers prioritize venues like TMLR for meaningful feedback?
- A researcher's viral post claims TMLR reviewers are more knowledgeable and their reviews more constructive than those at ICML, NeurIPS, and ICLR.
- ICML reviews were described as often rushed, low-confidence, or hostile, potentially due to tight ~4-month decision cycles.
- The critique questions the value of high-pressure conference submissions versus journal-style processes like TMLR's for quality feedback.
Why It Matters
Peer review quality directly impacts research progress and career advancement for AI scientists and engineers.