Stop asking "how good is this" to decide between donation opportunities I recommend
A viral AI safety post argues donors should trust advisors' high bar, not compare individual opportunities.
In a viral post on LessWrong, the AI and rationality forum, researcher Zach Stein-Perlman tackles a common donor dilemma. Prospective donors to AI safety and effective altruism (EA) causes often ask advisors to rank the cost-effectiveness of specific recommended opportunities, seeking the single best place for their money. Stein-Perlman argues this is the wrong question: a competent donation advisor's job is to maintain a high bar for recommendations, and if they are doing that job, every opportunity they present should be roughly equally valuable per marginal dollar. That bar is high—he cites figures like $3 billion per 1% future-improvement for 501(c)(3) nonprofit opportunities.
The core concept is fungibility: your donation to one recommended cause typically displaces or substitutes for funding that would have come from another source directed by the same advisor. Therefore, trying to pick the 'best' one is often counterproductive. The post acknowledges important exceptions where marginal value can be higher, such as opportunities with legal donor caps (e.g., $7K per donor in politics), urgent or sensitive projects, or when a specific donor's identity provides unique advantages. For most donors, however, the advice is to trust the advisor's curated list and high standard, rather than attempting a granular comparison that the funding ecosystem itself renders moot.
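The funging logic above can be made concrete with a toy model (all numbers and names here are hypothetical, not from the post): an advisor tops up each recommended opportunity to a funding target from a shared pool, so an earmarked gift to one opportunity simply frees up pool money for the others.

```python
# Toy sketch of "funging": hypothetical targets and pool sizes,
# illustrating why an earmarked gift doesn't change final allocations.

def allocate(pool, targets, earmarked):
    """Advisor fills each opportunity to its target, counting the
    donor's earmarked gifts first and using the pool for the rest."""
    allocation = {}
    remaining = pool
    for name, target in targets.items():
        gift = earmarked.get(name, 0)
        top_up = min(remaining, max(0, target - gift))
        allocation[name] = gift + top_up
        remaining -= top_up
    return allocation, remaining

targets = {"org_a": 100, "org_b": 100}  # funding targets, in $K (hypothetical)

# Scenario 1: donor earmarks $10K for org_a.
alloc_a, left_a = allocate(pool=200, targets=targets, earmarked={"org_a": 10})

# Scenario 2: the same donor earmarks $10K for org_b instead.
alloc_b, left_b = allocate(pool=200, targets=targets, earmarked={"org_b": 10})

# Either way, both orgs end up fully funded at 100; the donor's choice
# only changes which org the advisor's pool money flows to. The real
# marginal effect is the $10K left over for the advisor's next
# above-the-bar opportunity.
print(alloc_a, left_a)  # {'org_a': 100, 'org_b': 100} 10
print(alloc_b, left_b)  # {'org_a': 100, 'org_b': 100} 10
```

In this simplified model, the donor's intra-list choice is a wash; only the exceptions the post names (caps, urgency, donor identity) break the symmetry.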
- Advisors should maintain a high bar: if they do, all recommended opportunities are roughly equally valuable to a marginal donor, eliminating the need to compare within the list.
- Key exceptions exist: Higher value can be found in capped donations (e.g., $7K max in politics), urgent projects, or when a donor's identity is uniquely beneficial.
- The concept of 'funging' is central: your donation to one cause typically substitutes for funding from other sources, making comparisons within the recommended list largely moot.
Why It Matters
This framework could streamline billions of dollars in AI safety and EA funding by reducing donor decision paralysis and shifting allocation decisions to the advisors best positioned to make them.