Research & Papers

How Designers Envision Value-Oriented AI Design Concepts with Generative AI

A study of 18 designers shows generative AI creates recursive value tensions, and that designers spot harms more readily than benefits.

Deep Dive

A new study from the University of Washington (arXiv:2605.00280) examines how designers navigate value-oriented design when using generative AI as both tool and material. The researchers conducted concept envisioning activities and interviews with 18 experienced designers, revealing three key dynamics. First, designers engage in a reciprocal reflection-in-action loop with AI, where the tool's outputs constantly reframe their thinking. This process surfaces multi-level value tensions: conflicts between what the AI tool prioritizes, what the designer values, and what the final concept should embody.

Second, the study found that designers are more attuned to harm recognition than to articulating positive value fulfillment. They quickly spot potential biases, exclusion, or misuse of their designs, but struggle to define what “good” value looks like. Third, designers exercise anticipatory judgment through meta-design reasoning: they think about how the AI’s own embedded assumptions (e.g., training data biases, default UI patterns) could propagate into the final concept and its real-world use. The authors extend Donald Schön’s reflection-in-action framework to this AI-mediated context, arguing that design tools need to surface value tensions explicitly and support harm-centered reasoning rather than just efficiency gains.

Key Points
  • 18 designers engaged in a reciprocal reflection-in-action loop with generative AI, revealing recursive value tensions across tool, designer, and concept.
  • Designers are more adept at recognizing potential harms (bias, exclusion) than at articulating how their AI-enabled concepts fulfill positive values.
  • Meta-design reasoning emerges as a key practice: designers anticipate how the AI tool's assumptions could propagate into future use contexts.

Why It Matters

As AI becomes a design co-pilot, tools must support harm-awareness, not just efficiency gains, reshaping how we build responsible AI.