The right way to talk about LLMs
A viral LessWrong essay argues that public AI fear is driven by jobs, not superintelligence, and calls for new PR tactics.
A viral essay by Steffee on the LessWrong forum challenges how the AI community discusses risk. The post argues that public fear is not primarily driven by philosophical arguments about superintelligent AI causing human extinction, but by more immediate concerns: widespread job displacement and the proliferation of low-quality 'AI slop.' This disconnect suggests that advocacy focused on long-term existential risk may be failing to move the public or policymakers, and that a shift in narrative is needed.
To explore this perceptual gap, the author presents three illustrative AI 'personas.' The most striking is a 'creepy therapist' from an Anthropic advertisement, which the essay reads as a company knowingly leveraging unease about AI's unsettling nature. The essay contrasts Anthropic's perceived ethical stance, highlighted by its refusal of a Pentagon deal for mass surveillance, with OpenAI's commercial dominance, questioning whether public sentiment will reward responsible actors. Ultimately, the piece is a call to develop more effective communication strategies, potentially including viral PR campaigns, to build support for slowing development and increasing regulation of AI systems.
- Public AI fear centers on job loss and 'slop,' not superintelligence extinction risks.
- The post analyzes a 'creepy therapist' ad from Anthropic to explore AI's unsettling public image.
- The essay advocates new narratives and PR strategies to build support for responsible AI development and regulation.
Why It Matters
Effective AI policy and public acceptance may depend on addressing real, immediate fears rather than abstract long-term risks.