Why I'm Less of a Shill for Related Work Sections
A prominent AI safety researcher argues that mandating related work sections often fails to produce genuine engagement with the literature.
LawrenceC, a researcher in the AI safety community, has published a reflection on the value of related work sections in technical writing, particularly on platforms like LessWrong and the Alignment Forum. In 2022, he was a strong proponent, arguing they were crucial for connecting independent research with the academic literature, preventing duplicated effort, and improving communication with mainstream academics. He pushed back against common objections, such as the fear of "sharing credit" or the belief that academic work had nothing to offer novel AI safety topics.
Four years later, his position has shifted. He notes that as AI safety has become more mainstream, the pressure to cater to academic norms has lessened. More importantly, he has become disillusioned with how these sections are executed in practice. Mandatory related work sections, he observes, often fail to achieve their goal of deep literature engagement: authors merely list paper names or, worse, misrepresent cited work after reading only the abstracts. The rise of LLMs has made it even easier to generate these sections with minimal effort, further undermining their intended purpose of fostering genuine scholarly dialogue and thorough prior art review.
- Advocated in 2022 to connect independent AI safety work with academia and prevent redundancy.
- Now disillusioned, finding these sections often feature superficial or misrepresented citations rather than deep engagement.
- Notes that LLMs make it easier than ever to generate related work sections with minimal real effort.
Why It Matters
Challenges a core academic norm, suggesting that the quality of literature engagement matters more than checking a box.