AI Safety

Searchable explorer of EA Forum & LessWrong posts with explicit cruxes or "change my mind" content

An AI-assisted tool, built in roughly 1-2 hours, filters EA Forum and LessWrong posts for pivotal "change my mind" arguments.

Deep Dive

David Reinstein, as part of The Unjournal's Pivotal Questions project, has launched an early-stage, searchable database tool designed to surface critical arguments from the EA Forum and LessWrong. The tool specifically targets posts containing explicit "cruxes" (core beliefs on which a disagreement hinges), "change my mind" statements, hinge beliefs, and research-blocking open questions. Built with AI-assisted curation in roughly 1-2 hours, it currently catalogs approximately 39 posts from April 2024 to April 2026. Users can filter entries by signal type, cause area (with a current tilt toward AI safety, AI welfare, and cause prioritization), forum source, and relevance to The Unjournal's rigorous evaluation work.

The explorer provides a filterable table where each entry is tagged with the crux or change condition, an assessment of its tractability for academic-style research, and a candidate "Pivotal Question mapping." This allows researchers and community members to efficiently navigate complex debates and identify the most decision-relevant, unresolved questions. The project aims to bridge online forum discussions with formal research agendas, directly feeding candidate questions into The Unjournal's evaluation and synthesis process. Reinstein notes the coverage is patchy and welcomes feedback via a Hypothes.is sidebar or a "Suggest entry" button, with plans to maintain and extend the database if the community finds it useful for shaping research and career plans.
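The filterable table described above can be pictured as a list of tagged records. The following is a minimal sketch only; the field names (`signal_type`, `cause_area`, `pq_mapping`, etc.) and the `filter_entries` helper are illustrative assumptions, not the tool's actual schema or code.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    """One catalog row (hypothetical schema, for illustration only)."""
    title: str
    forum: str          # "EA Forum" or "LessWrong"
    signal_type: str    # e.g. "explicit crux", "change my mind"
    cause_area: str     # e.g. "AI safety", "AI welfare"
    tractable: bool     # amenable to academic-style research?
    pq_mapping: str     # candidate Pivotal Question mapping

def filter_entries(entries, *, cause_area=None, signal_type=None, forum=None):
    """Return entries matching every supplied filter (None means no filter)."""
    return [
        e for e in entries
        if (cause_area is None or e.cause_area == cause_area)
        and (signal_type is None or e.signal_type == signal_type)
        and (forum is None or e.forum == forum)
    ]

# Toy catalog standing in for the ~39 real posts.
catalog = [
    Entry("Post A", "LessWrong", "explicit crux", "AI safety", True, "PQ-1"),
    Entry("Post B", "EA Forum", "change my mind", "AI welfare", False, "PQ-2"),
    Entry("Post C", "EA Forum", "explicit crux", "AI safety", True, "PQ-3"),
]

ai_safety_cruxes = filter_entries(
    catalog, cause_area="AI safety", signal_type="explicit crux"
)
```

Combining filters with keyword-only arguments mirrors how the explorer lets users stack signal-type, cause-area, and forum-source filters on the same table.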

Key Points
  • AI-assisted tool built in roughly 1-2 hours maps ~39 forum posts with explicit "change my mind" arguments from April 2024 to April 2026.
  • Allows filtering by signal type, cause area, and forum source, with current coverage tilted toward AI safety, AI welfare, and cause prioritization.
  • Designed to feed pivotal questions directly into The Unjournal's research evaluation and help inform career/research plans.

Why It Matters

The tool systematically connects online debate to actionable research, helping prioritize the most critical unresolved questions in AI safety and effective altruism.