The hard part isn't noticing when papers are bad, it's deciding what to do afterwards
A viral AI safety essay argues the real challenge isn't spotting flaws, but deciding their significance.
A viral essay by LawrenceC on the LessWrong forum is sparking debate in AI research circles by critiquing shallow paper criticism. The author, known for his own critical takes, argues that the real intellectual work begins after identifying a flaw—determining whether it actually invalidates a paper's core claims. He observes a common pattern, especially on social media platforms like Twitter, where critics find one methodological issue and dismiss entire papers, a practice he compares to his younger self scoring debate points rather than seeking truth.
LawrenceC contends that while finding flaws is easy, deeply investigating a paper's claims takes significant time and cognitive effort. This asymmetry, he suggests, produces a proliferation of low-effort critiques that don't advance understanding. To move beyond 'dunking,' he proposes three concrete rules for better paper evaluation: first, understand the paper well enough to summarize it to the authors' satisfaction; second, focus criticism on issues fatal to core claims, not typos or formatting; third, steelman ambiguous methodological choices before attacking them. The essay resonates because it addresses a tension in fast-moving fields like AI safety, where the volume of research can incentivize quick, negative reactions over nuanced analysis.
- Essay critiques 'gotcha' culture in AI paper reviews, where surface flaws lead to full dismissal.
- Argues the hard part is evaluating if a flaw actually undermines a paper's core scientific claim.
- Proposes three rules: understand the paper fully, distinguish fatal from minor flaws, and steelman arguments first.
Why It Matters
For professionals evaluating AI research, the essay advocates substantive critique over performative criticism, raising the quality of discourse in the field.