What is Safety? Corporate Discourse, Power, and the Politics of Generative AI Safety
A new study reveals how AI giants control the narrative on safety to serve their own interests.
A new academic paper critically examines how major generative AI companies define and communicate 'safety' in their public statements. Using discourse analysis, the researchers find that these firms deploy language to establish their own authority, normalize experimental safety practices, and promote a participatory agenda. The authors warn that uncritically accepting these corporate narratives risks sidelining public accountability and marginalizing alternative governance models focused on equity and justice in AI development.
Why It Matters
The study shows that whoever defines 'safety' wields significant power over the future of AI regulation and ethics.