AI & ML New Capability

Moves coding agents from passive execution to proactive collaboration by teaching them when to ask for clarification on underspecified tasks.

March 30, 2026

Original Paper

Ask or Assume? Uncertainty-Aware Clarification-Seeking in Coding Agents

Nicholas Edwards, Sebastian Schuster

arXiv · 2603.26233

The Takeaway

Current agents often fail by silently making assumptions when context is missing; this multi-agent scaffold decouples uncertainty detection from task execution, achieving a 69.4% resolve rate on SWE-bench Verified. It provides a blueprint for building autonomous systems that are safer and more effective in real-world, ambiguous software environments.
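The paper does not publish its scaffold here, but the core idea of decoupling detection from execution can be sketched as a two-stage router: a detector agent first scores how underspecified a task is, and only sufficiently specified tasks reach the executor agent; otherwise the system asks a clarifying question first. Everything below (the threshold, the toy vague-marker heuristic, the `route` function) is a hypothetical illustration, not the paper's method.

```python
from dataclasses import dataclass

AMBIGUITY_THRESHOLD = 0.5  # assumed cutoff, not from the paper


@dataclass
class Decision:
    action: str  # "execute" or "clarify"
    detail: str  # task to run, or the clarifying question to ask


def detect_ambiguity(task: str) -> float:
    """Toy stand-in for the detector agent: counts vague phrasings.

    A real detector would be an LLM call; here a few surface markers
    of underspecification produce a score in [0, 1].
    """
    vague_markers = ("somehow", "etc", "fix it", "make it better")
    hits = sum(marker in task.lower() for marker in vague_markers)
    return min(1.0, hits / 2)


def route(task: str) -> Decision:
    """Decoupled scaffold: detect first, then either execute or clarify."""
    if detect_ambiguity(task) >= AMBIGUITY_THRESHOLD:
        question = "Which behavior should change, and what output do you expect?"
        return Decision("clarify", question)
    return Decision("execute", task)


print(route("Rename parse_config to load_config in config.py").action)  # execute
print(route("Make it better somehow").action)                           # clarify
```

Keeping detection separate from execution means the executor never has to second-guess the instruction: by the time it runs, the task has either passed the ambiguity check or been clarified by the user.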

From the abstract

As Large Language Model (LLM) agents are increasingly deployed in open-ended domains like software engineering, they frequently encounter underspecified instructions that lack crucial context. While human developers naturally resolve underspecification by asking clarifying questions, current agents are largely optimized for autonomous execution. In this work, we systematically evaluate the clarification-seeking abilities of LLM agents on an underspecified variant of SWE-bench Verified. […]