Research & Papers

Decision-Oriented Programming with Aporia

New paradigm forces AI agents to explicitly ask programmers for design decisions, creating an editable audit trail.

Deep Dive

A team of researchers from UC San Diego and the University of Washington has introduced a new paradigm called Decision-Oriented Programming (DOP) with their tool, Aporia. The core problem they address is the 'black box' nature of modern AI coding assistants like GitHub Copilot. While these agents reduce cognitive load, they silently make critical design decisions, leaving developers out of the loop and potentially creating code that doesn't match their intent. DOP brings these hidden decisions to the forefront by making them the primary medium of collaboration between human and AI.

Aporia, the design probe built to test DOP, implements this through three key mechanisms. First, it tracks all decisions in a persistent, editable 'Decision Bank.' Second, the AI agent proactively elicits decisions by asking the programmer specific design questions instead of making assumptions. Third, each decision is encoded as an executable test suite, creating a traceable link between design intent and final implementation. In a user study, participants' mental models of their code were 5x less likely to disagree with the actual implementation than when they used a baseline agent. The system successfully increased engagement in the design process and helped scaffold both code exploration and validation.
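To make the mechanisms concrete, here is a minimal sketch of what a Decision Bank might look like. The class and field names are hypothetical illustrations, not Aporia's actual API; the sketch only assumes the properties described above: decisions are stored persistently, remain editable after creation, and carry links to the tests that encode them.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One design decision elicited from the programmer (hypothetical schema)."""
    question: str   # the design question the agent asked
    choice: str     # the answer the programmer gave
    tests: list[str] = field(default_factory=list)  # tests encoding this decision

class DecisionBank:
    """Persistent, editable store of design decisions (illustrative sketch)."""

    def __init__(self) -> None:
        self._decisions: dict[str, Decision] = {}

    def record(self, key: str, decision: Decision) -> None:
        self._decisions[key] = decision

    def revise(self, key: str, new_choice: str) -> Decision:
        # Decisions stay editable post-creation; in a real system the
        # linked tests and affected code would then be regenerated.
        decision = self._decisions[key]
        decision.choice = new_choice
        return decision

    def get(self, key: str) -> Decision:
        return self._decisions[key]


# Example: recording a decision the agent elicited, then revising it.
bank = DecisionBank()
bank.record("dup-handling", Decision(
    question="How should duplicate entries be handled?",
    choice="keep first occurrence",
    tests=["test_dedup_keeps_first"],
))
bank.revise("dup-handling", "keep last occurrence")
```

The key design property is that the bank, not the chat transcript, is the durable record: every choice stays inspectable and revisable long after the conversation that produced it.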

Key Points
  • The Aporia system made programmers' mental models 5x less likely to disagree with the final code than a standard AI coding agent did.
  • It introduces a persistent 'Decision Bank' where all design choices are stored and can be edited post-creation.
  • Each design decision is linked to an executable test suite, creating a verifiable audit trail from intent to implementation.
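The decision-to-test link in the last point can be illustrated with a short sketch. The function and test below are hypothetical examples, not code from Aporia: they show how a recorded decision (here, "on duplicate keys, keep the first occurrence") becomes an executable check that fails if the implementation later drifts from the stated intent.

```python
def dedupe(items: list[tuple[str, int]]) -> list[tuple[str, int]]:
    """Remove entries with duplicate keys, keeping the first occurrence."""
    seen: set[str] = set()
    out: list[tuple[str, int]] = []
    for key, value in items:
        if key not in seen:
            seen.add(key)
            out.append((key, value))
    return out


def test_dedup_keeps_first() -> None:
    # Encodes the decision 'dup-handling = keep first occurrence':
    # when a key repeats, the FIRST value must win.
    assert dedupe([("a", 1), ("a", 2), ("b", 3)]) == [("a", 1), ("b", 3)]
```

Because the test names the decision it encodes, a failing run points back to a specific design choice rather than an anonymous bug, which is what makes the trail auditable.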

Why It Matters

This addresses the critical 'trust gap' in AI-assisted development, giving developers control and auditability over AI-generated code.