Communicating with people who disagree on "obvious" things
When "obvious" truths exclude newcomers and stifle honest debate in AI circles
LawrenceC's LessWrong post challenges the community's mantra of "saying obvious things," revealing a hidden layer of non-obvious assumptions that create barriers. These include beliefs like supporting US chip export controls to win the AI race against China, which are obvious to some Bay Area insiders but controversial to many others. The author notes that such assumptions make newcomers feel stupid or unwelcome, citing feedback from younger people who feel excluded from AI Safety, EA, and Constellation circles.
The post distinguishes between known controversial topics (race, gender, SB 1047) and hidden controversial ones (America-first ethics, working for AI labs under deontological views). LawrenceC argues that stating non-obvious things as obvious can manufacture consensus, let speakers skip to the "important parts," or even deliberately exclude. The proposed solution: both speakers and listeners adopt cheap cultural norms, with speakers checking their assumptions and listeners calling them out charitably, to improve communication without restating every axiom.
- Hidden assumptions like America-first AI race beliefs exclude newcomers from AI Safety and EA communities
- Known controversial topics (SB 1047, interpretability) are easier to handle than hidden ones (chip export controls, lab ethics)
- Most people assume "obvious" things non-maliciously; the solution is mutual effort via cheap cultural norms
Why It Matters
For tech professionals, surfacing hidden assumptions is key to inclusive, productive debate in polarized AI communities.