101 Humans of New York on the Risks of AI
In-person survey finds 98% of respondents with an opinion want strict limits on superhuman AI development.
A researcher from the LessWrong community conducted a unique, in-person survey of 101 people in New York City to gauge public sentiment on AI risks, moving beyond traditional online polls. The door-to-door and street-level approach, which included Spanish-language interviews, revealed a stark disconnect between public anxiety and current regulatory frameworks. The core finding is a powerful consensus for caution: when asked about developing 'superhuman AI,' only 2 out of 96 respondents with an opinion supported proceeding under current rules or accelerating development.
The survey methodology highlighted the challenges of public polling on complex tech topics. Questions about supporting politicians who favor 'limits on developing more powerful AI' were frequently misunderstood, with some respondents conflating political control over AI with regulatory limits. Despite these complexities, the qualitative data painted a clear picture of widespread apprehension. Many who expressed initial excitement about AI's potential reversed their stance when confronted with hypotheticals about superintelligent systems beyond human control, indicating that public concern deepens with understanding of the long-term risks.
- 98% of respondents with an opinion (94 of 96) support serious regulation of, or a slowdown in, superhuman AI development.
- The in-person, door-to-door method captured nuanced fears often missed by online surveys, including anxiety about losing control of AI systems.
- Survey design challenges, like public confusion over the term 'limits,' underscore the difficulty of polling on complex AI policy.
Why It Matters
This ground-level data reveals strong public demand for governance, putting pressure on policymakers and tech companies to prioritize safety over speed.