AI Safety

Fundamental Uncertainty, First Edition

New book argues truth is uncertain and dependent on human goals; it is aimed at AI safety researchers.

Deep Dive

Gordon Seidoh Worley has published the first edition of his philosophical book, 'Fundamental Uncertainty,' now available to read online. The work, which evolved from a draft shared on the LessWrong forum, directly addresses core epistemological challenges Worley identified during his own AI safety research. Its central thesis is that our knowledge is fundamentally uncertain because of epistemic circularity, a problem known as the Problem of the Criterion: any criterion for what counts as knowledge must itself be justified, which requires yet another criterion. The book argues that we manage this uncertainty by making pragmatic assumptions, so the truth we can know is not objective but depends on our goals and on what we care about.

Worley explicitly targets the book at a general STEM audience, with particular relevance for the rationalist community and those working on AI and AI safety. He states he wrote it to document the epistemological foundations necessary for pursuing AI safety research. An online version is live now, with print, ebook, and audiobook formats in development. Worley also announced plans for an upcoming essay contest related to the book's themes.

Key Points
  • Book argues knowledge is fundamentally uncertain due to the 'Problem of the Criterion', a problem of epistemic circularity.
  • Posits that truth is not independent but grounded in human care and pragmatic goal-seeking.
  • Explicitly written for AI safety researchers, growing out of the author's own work in the field.

Why It Matters

Provides a philosophical framework for AI safety, challenging assumptions about objective truth in system design.