Uses generative world models to synthesize photorealistic, counterfactual failure data for training robot recovery behaviors.
arXiv · March 17, 2026 · 2603.13528
The Takeaway
Autonomous recovery from errors is a major bottleneck in robotics. Dream2Fix bypasses the need for dangerous real-world failure collection by perturbing successful demonstrations in a world model, enabling robots to map visual anomalies directly to corrective recovery trajectories.
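The data-generation idea can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's implementation: `perturb_in_world_model`, `make_recovery_pair`, and the 1-D toy "trajectory" are all stand-ins invented here, and a real world model would roll out photorealistic observations rather than nudged scalars. It only shows the loop shape: take a successful demo, diverge it into a synthetic failure, and pair the anomalous state with the correction that rejoins the success path.

```python
import random

# Hypothetical sketch (names and mechanics are assumptions, not from the
# paper): synthesize counterfactual failures from successful demos, then
# build (anomaly -> recovery trajectory) training pairs.

def perturb_in_world_model(demo, rng, noise=0.3):
    """Inject a perturbation partway through a successful demo, producing
    a synthetic failure rollout (a counterfactual). A real system would
    roll this out in a learned generative world model."""
    t_fail = rng.randrange(1, len(demo))
    drift = rng.uniform(-noise, noise)
    failed = list(demo[:t_fail])
    for s in demo[t_fail:]:
        failed.append(s + drift)  # rollout diverges from the success path
    return failed, t_fail

def make_recovery_pair(demo, failed, t_fail):
    """Pair the first anomalous state with the corrective trajectory that
    returns to the original successful demonstration."""
    return {
        "anomaly": failed[t_fail],      # what the robot would observe
        "recovery": demo[t_fail:],      # target: rejoin the success path
    }

rng = random.Random(0)
success_demo = [0.1 * t for t in range(10)]  # toy 1-D "trajectory"

dataset = []
for _ in range(5):
    failed, t_fail = perturb_in_world_model(success_demo, rng)
    dataset.append(make_recovery_pair(success_demo, failed, t_fail))

print(len(dataset))  # 5 synthetic (anomaly -> recovery) pairs
```

Each pair supervises a policy to map an observed anomaly directly to an executable correction, rather than to a coarse binary failure label.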
From the abstract
While recent foundation models have significantly advanced robotic manipulation, these systems still struggle to autonomously recover from execution errors. Current failure-learning paradigms rely on either costly and unsafe real-world data collection or simulator-based perturbations, which introduce a severe sim-to-real gap. Furthermore, existing visual analyzers predominantly output coarse, binary diagnoses rather than the executable, trajectory-level corrections required for actual recovery.