Developer Tools

SWE-Edit: Rethinking Code Editing for Efficient SWE-Agent

By decoupling code inspection from modification, SWE-Edit cuts inference costs by 17.9%.

Deep Dive

Large language model agents tackling software engineering tasks have long suffered from a fundamental context coupling problem: the standard code editing interface forces agents to interleave exploratory viewing with strictly formatted edit generation within a single context window. This causes irrelevant information to accumulate, degrading agent performance. To solve this, researchers introduce SWE-Edit, which decomposes code editing into two specialized subagents: a Viewer that extracts task-relevant code on demand, and an Editor that executes modifications from high-level plans. This architecture allows the main agent to focus on reasoning while delegating context-intensive operations to clean context windows.
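The decoupling described above can be pictured with a minimal sketch (all names here are our own illustration, not SWE-Edit's actual API; the real Viewer and Editor wrap LLM calls, stubbed out below with plain functions). The point is that each subagent works in its own fresh context and only a short distilled result flows back to the main agent:

```python
# Hypothetical sketch of context decoupling: exploratory reads stay
# inside the Viewer's local context; the main agent keeps only summaries.

def run_viewer(repo, query):
    """Viewer subagent: explores in a fresh context, returns only the
    task-relevant snippet, not the whole exploration trace."""
    context = []  # fresh context window, discarded afterwards
    for path, text in repo.items():
        context.append(text)          # raw file contents stay local
        if query in text:
            return f"{path}: {text}"  # only the distilled result escapes
    return "not found"

def run_editor(repo, plan):
    """Editor subagent: executes a high-level plan in its own clean context."""
    path, old, new = plan
    repo[path] = repo[path].replace(old, new)
    return f"edited {path}"

def main_agent(repo):
    """Main agent reasons over short results, never raw file dumps."""
    history = []
    history.append(run_viewer(repo, "bug"))
    history.append(run_editor(repo, ("app.py", "bug", "fix")))
    return history

repo = {"app.py": "def f(): return bug", "util.py": "def g(): pass"}
result = main_agent(repo)
print(result)
```

In this toy version the main agent's history holds two one-line strings instead of every file it inspected, which is the mechanism the paper credits for keeping the main context clean.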

On SWE-bench Verified, SWE-Edit improves the resolved rate by 2.1% while reducing inference cost by 17.9%. The team further investigated what makes an effective editing model, observing that the prevalent find-and-replace format is error-prone. They trained Qwen3-8B with GRPO to adaptively select editing modes, yielding better editing efficiency than single-format baselines. Additionally, they propose a code editing benchmark that reliably predicts downstream agentic performance, providing practical guidance for editing model selection. The code is publicly available on GitHub.
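The fragility of find-and-replace editing is easy to reproduce. The sketch below is illustrative only (the function names and the line-range fallback are our own, not SWE-Edit's interface): an exact-match edit silently fails when the model misquotes the target text even slightly, while a line-range edit mode still succeeds, which is the kind of trade-off an adaptively trained editor can exploit:

```python
def find_and_replace(text, old, new):
    """Exact-match edit: fails unless `old` appears exactly once."""
    if text.count(old) != 1:
        return None  # not found, or ambiguous match
    return text.replace(old, new)

def replace_lines(text, start, end, new_lines):
    """Line-range edit mode: robust to small quoting errors."""
    lines = text.splitlines()
    return "\n".join(lines[:start] + new_lines + lines[end:])

source = "def f(x):\n    return x + 1\n"

# The model misquotes the expression ("x+1" instead of "x + 1"),
# so the exact-match edit fails outright:
assert find_and_replace(source, "    return x+1", "    return x + 2") is None

# A line-range edit of line 1 does not depend on exact quoting:
patched = replace_lines(source, 1, 2, ["    return x + 2"])
assert "return x + 2" in patched
```

When the quoted text is correct, find-and-replace is the cheaper mode; selecting between such modes per edit is what the GRPO-trained model learns to do.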

Key Points
  • SWE-Edit decomposes code editing into a Viewer and an Editor, solving the context coupling problem in LLM agents.
  • On SWE-bench Verified, it improves the resolved rate by 2.1% while reducing inference cost by 17.9%.
  • The team trained Qwen3-8B with GRPO to adaptively select editing modes, outperforming single-format baselines.

Why It Matters

SWE-Edit makes AI code agents both more accurate and cheaper to run, a rare win-win.