RECAP platform reveals how devs actually use AI coding assistants
New open-source tool captures 2,034 prompts and 8,239 edits to analyze developer-AI interaction patterns.
Understanding how developers actually work with AI coding assistants has been notoriously difficult—conversation logs miss what code was tried and discarded, while git histories skip the prompts that led to specific edits. A new open-source platform called RECAP (Replay and Examine Captured AI Programming) bridges that gap. Built as a VS Code extension by researchers at Carnegie Mellon University, RECAP passively records both AI chat sessions and fine-grained code edits without interrupting the developer's workflow. It then merges these streams into a single interactive timeline that can be replayed step-by-step. On top of that, RECAP offers an extensible analysis layer with built-in modules for classifying developer behavior and measuring how much they rely on AI suggestions.
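The core idea of merging two independently recorded, timestamped streams into one replayable timeline can be sketched in a few lines. This is an illustrative sketch only: the event tuples, field names, and payloads below are hypothetical, not RECAP's actual data schema.

```python
import heapq

# Hypothetical event tuples (timestamp, kind, payload); RECAP's real
# recording format is not reproduced here.
chat_events = [
    (1.0, "prompt", "add a retry loop"),
    (5.0, "prompt", "fix the failing test"),
]
edit_events = [
    (2.0, "edit", "+for attempt in range(3):"),
    (6.0, "edit", "-assert x == 1"),
]

# heapq.merge lazily interleaves two already-sorted streams, comparing
# tuples element-wise (timestamp first), yielding one chronological
# timeline suitable for step-by-step replay.
timeline = list(heapq.merge(chat_events, edit_events))
# Events alternate here: prompt, edit, prompt, edit
```

Keeping the two streams separate at capture time and merging at analysis time means the recorder stays simple and non-intrusive, while the timeline can be rebuilt with different merge policies later.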
In a real-world test, the team deployed RECAP in a university software engineering course where 41 students worked on a multi-week project. The platform collected 2,034 prompts and 8,239 code edits, providing a rich dataset for studying interaction patterns. For instance, by linking each prompt to the resulting code changes, researchers can now pinpoint where developers accept, modify, or discard AI outputs—insights that neither chat logs nor git histories can offer on their own. RECAP's source code is available on GitHub, and the paper detailing the platform's design and findings is published on arXiv. For teams looking to optimize their own AI-assisted workflows or for researchers studying human-AI collaboration, RECAP provides a much-needed window into the black box of pair programming with AI.
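One simple way to link prompts to resulting edits, as a hedged illustration of the kind of analysis described above, is to attribute each edit to the most recent preceding prompt within a time window. The function name, tuple format, and window parameter below are assumptions for the sketch, not RECAP's published algorithm.

```python
# Hypothetical linking pass over a chronologically sorted timeline of
# (timestamp, kind, payload) tuples, where kind is "prompt" or "edit".
def link_edits_to_prompts(timeline, window=300.0):
    """Attribute each edit to the most recent prompt within `window` seconds."""
    links = {}           # prompt text -> list of edit payloads
    last_prompt = None   # (timestamp, prompt text) of the latest prompt seen
    for ts, kind, payload in timeline:
        if kind == "prompt":
            last_prompt = (ts, payload)
            links[payload] = []
        elif kind == "edit" and last_prompt and ts - last_prompt[0] <= window:
            links[last_prompt[1]].append(payload)
    return links

timeline = [
    (1.0, "prompt", "add a retry loop"),
    (2.0, "edit", "+for attempt in range(3):"),
    (500.0, "edit", "refactor unrelated module"),  # outside window: unattributed
]
links = link_edits_to_prompts(timeline)
# → {"add a retry loop": ["+for attempt in range(3):"]}
```

A real analysis module would need more care (overlapping sessions, multi-file edits, explicit accept/reject signals), but the window heuristic shows why having both streams in one timeline matters: the attribution is impossible from either stream alone.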
- RECAP is an open-source VS Code extension that passively records AI chat sessions and code edits, then merges them into a unified timeline for replay.
- In a university course deployment, it captured 2,034 prompts and 8,239 edits from 41 students over a multi-week project.
- The platform includes analysis modules for behavioral classification and AI reliance measurement, enabling insights not possible from chat logs or git histories alone.
Why It Matters
RECAP offers a systematic way to audit and optimize human-AI pair programming, making invisible workflows measurable and improvable.