Inkhaven menu, part 2
The 'Real AI' professor's eclectic list includes coral reef funding, epistemic humility, and sci-fi story premises.
David Scott Krueger, a professor and AI safety researcher often associated with the 'Real AI' movement, has shared a second menu of potential blog topics for Inkhaven. Following an earlier list of 23 AI-adjacent ideas, this new list of 22 topics deliberately steps outside artificial intelligence. The post offers a rare glimpse into the wide-ranging intellectual interests of a figure known primarily for his technical work, spanning philosophy, science, culture, and personal reflection.
Among the most striking proposals is a critical look at coral reef preservation, specifically referencing the work of the late researcher Ruth Gates. Krueger notes a project funded at under $5 million that he estimates could have yielded value ">2,000,000 times that"—a potential upside on the order of $10 trillion—and expresses frustration that this analysis received little attention on effective altruism forums. Other notable topics include an exploration of "radical epistemic humility," questioning how we can know what we know in an uncertain world, and reflections on what happened to genuine counter-culture.
The list is notably personal and philosophical, featuring ideas like "What I want from a computer" (focused on the removal of distractions), thoughts on "Genuine creativity" and whether it requires cultural isolation, and even "Sci-fi story premises" inspired by Kurt Vonnegut. This departure from AI content underscores a common trait among deep technical thinkers: a drive to apply rigorous inquiry to all facets of human experience, from management lessons learned as a professor to the aesthetic realism of modern pop music.
- Lists 22 non-AI topics from coral reef economics ($5M project with >$10T upside) to epistemic humility.
- Reveals the broad philosophical and scientific interests of a leading AI safety researcher outside his field.
- Includes personal reflections on management, creativity, counter-culture, and the nature of "genuineness" as a superpower.
Why It Matters
Shows the interdisciplinary thinking driving AI safety leaders, highlighting that foundational progress requires looking beyond code alone.