Research & Papers

Designing AI agents that know when to step back

Researchers propose three 'coordination zones' to balance AI autonomy with human control.

Deep Dive

As AI agents become capable of autonomous work such as coding, research, and customer service, a critical design challenge emerges: how to effectively coordinate the human side of the interaction. Google researchers James Pierce, Siddharth Gupta, and Vaiva Kalnikaitė argue that agentic AI is fundamentally different from traditional software: it is proactive, conversational, and can make decisions on its own. This makes designing for trust, control, and transparency essential, requiring a new framework to align what users do with what the AI is doing, both visibly and behind the scenes.

The researchers propose thinking about coordination along three dimensions: human involvement (user effort), AI salience (how prominent the AI feels), and AI activity (what it's doing). They then define three practical 'zones' of coordination. 'Done with me' is mutually collaborative, with high AI salience and human involvement, like co-writing a document. 'Done for me' is heavily automated, where the user initiates a task and reviews the output, like generating a competitor report. 'Done under me' involves discreet assistance, where AI works in the background on tasks like smart sorting or predictive text.
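The zones can be read as regions in that dimension space. A minimal, purely illustrative sketch of the taxonomy in Python; the two-level scale and the exact mapping from dimensions to zones are our simplification for clarity, not the researchers' formal model:

```python
from dataclasses import dataclass
from enum import Enum


class Level(Enum):
    LOW = "low"
    HIGH = "high"


@dataclass(frozen=True)
class CoordinationProfile:
    """Two of the paper's dimensions, coarsened to low/high for illustration."""
    human_involvement: Level  # how much effort the user contributes
    ai_salience: Level        # how prominent the AI feels to the user


def zone(profile: CoordinationProfile) -> str:
    """Map a coordination profile onto one of the three zones."""
    if profile.human_involvement is Level.HIGH and profile.ai_salience is Level.HIGH:
        return "Done with me"   # collaborative, e.g. co-writing a document
    if profile.human_involvement is Level.LOW and profile.ai_salience is Level.HIGH:
        return "Done for me"    # user initiates, AI produces, user reviews
    return "Done under me"      # discreet background assistance, e.g. smart sorting


# A co-writing session sits in the collaborative zone:
print(zone(CoordinationProfile(Level.HIGH, Level.HIGH)))  # Done with me
```

The point of the sketch is that the zones are not product categories but settings a designer can deliberately choose per task and context, which is what "calibration" means here.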

This framework moves beyond the binary choice of fully autonomous versus human-in-the-loop systems. Instead, it provides designers with a vocabulary and calibration points to match the intensity of human-AI coordination to the specific user, task, and context. The goal is to avoid agents that feel either too absent or too intrusive, creating experiences where AI knows precisely when to step forward and when to step back.

Key Points
  • Proposes three coordination zones: 'Done with me' (collaborative), 'Done for me' (automated), and 'Done under me' (discreet).
  • Frames coordination via human involvement, AI salience, and AI activity to design for trust and control.
  • Addresses the core UX challenge of aligning user experience with autonomous AI actions behind the scenes.

Why It Matters

Provides a crucial design blueprint for building trustworthy, effective AI agents that users will actually adopt and rely on.