Research & Papers

"I Just Don't Want My Work Being Fed Into The AI Blender": Queer Artists on Refusing and Resisting Generative AI

A new CSCW study reveals that queer artists are refusing to let their work train AI models, fearing cultural erasure.

Deep Dive

A team of researchers from Carnegie Mellon University and the University of Washington has published a new study, "I Just Don't Want My Work Being Fed Into The AI Blender," set to appear at CSCW 2026. Based on 15 semi-structured interviews with queer artists, the paper investigates how generative AI is disrupting queer artistic communities. The core finding is a profound tension: while art-making for these artists is a deeply relational act of political resistance, identity development, and community formation, they perceive the development and use of GenAI as fundamentally "anti-relational." This clash is driving widespread refusal and resistance.

The artists' resistance is not merely about copyright but centers on the fear of cultural and identity erasure. They see their work being absorbed into an impersonal "AI blender" that strips away the context, community, and political intent behind it. This leads to active refusal to allow their art to be used as training data for models like Stable Diffusion or DALL-E. The study notes only a narrow potential role for AI, such as using image-generation models to explore surreal aesthetics, but such cases are the exception. Drawing on queer theory, the authors argue that Computer-Supported Cooperative Work (CSCW) researchers should support these artists by challenging dominant AI narratives and aiding in "queer world-building" that protects these vital cultural spaces.

Key Points
  • Study based on 15 interviews with queer artists reveals deep resistance to GenAI data practices.
  • Artists view AI development as "anti-relational," threatening community, identity, and political expression built through art.
  • Researchers call for CSCW to support "queer world-building" and challenge dominant AI imaginaries that enable cultural erasure.

Why It Matters

Highlights a critical ethical blind spot in AI development: the non-consensual use of culturally significant work risks homogenizing and erasing marginalized voices.