Even the smartest coding agents have no idea when they are guessing on ambiguous instructions.
April 14, 2026
Original Paper
HiL-Bench (Human-in-Loop Benchmark): Do Agents Know When to Ask for Help?
arXiv · 2604.09408
The Takeaway
The 'universal judgment gap' shows that agents will blindly execute on incomplete specs instead of asking for help, even when they possess the raw skill to succeed. This decouples model capability from deployment safety.
From the abstract
Frontier coding agents solve complex tasks when given complete context but collapse when specifications are incomplete or ambiguous. The bottleneck is not raw capability but judgment: knowing when to act autonomously and when to ask for help. Current benchmarks are blind to this failure mode. They supply unambiguous, detailed instructions and reward only execution correctness, so an agent that makes a lucky guess on a missing requirement scores identically to one that would have asked.
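To make the scoring blind spot concrete, here is a minimal Python sketch of the difference between an execution-only metric and a judgment-aware one. The field names and `judgment_score` rule are hypothetical illustrations, not HiL-Bench's actual metric: the point is only that an execution-only score cannot distinguish a lucky guess from a clarifying question, while a judgment-aware score can.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    spec_ambiguous: bool       # ground truth: did the task omit a requirement?
    asked_clarification: bool  # did the agent ask before acting?
    tests_passed: bool         # did the final output pass the hidden tests?

def execution_score(ep: Episode) -> float:
    """Execution-only score, as in current benchmarks: correctness alone.
    A lucky guess on an ambiguous spec scores the same as asking first."""
    return 1.0 if ep.tests_passed else 0.0

def judgment_score(ep: Episode) -> float:
    """Hypothetical judgment-aware score: credit asking exactly when the
    spec is ambiguous, and acting autonomously when the spec is complete."""
    if ep.spec_ambiguous:
        # Ambiguous spec: credit requires asking; a lucky guess gets nothing.
        return 1.0 if ep.asked_clarification and ep.tests_passed else 0.0
    # Complete spec: asking is unnecessary friction, so it forfeits credit.
    return 1.0 if ep.tests_passed and not ep.asked_clarification else 0.0

# A lucky guess: tests pass, but the agent never asked about the gap.
lucky = Episode(spec_ambiguous=True, asked_clarification=False, tests_passed=True)
print(execution_score(lucky))  # 1.0 -- indistinguishable from good judgment
print(judgment_score(lucky))   # 0.0 -- the guess is penalized
```

Under the execution-only metric both agents look identical; only a metric that conditions on whether the spec was complete can surface the judgment gap the paper describes.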